anvalcq.bsky.social
@anvalcq.bsky.social
Reposted
Terminal-Based Web Browsing with Modern Conveniences
Hackaday Article
hackaday.com
January 2, 2026 at 3:04 AM
Reposted
Waterfox is a free and open-source browser based on Firefox with all the LLM/AI features removed. This might be the replacement.

Repo github.com/BrowserWorks...

Home page www.waterfox.com

Blog post www.waterfox.com/blog/no-ai-h...
December 17, 2025 at 7:49 AM
Reposted
Cisco defines AI security framework for enterprise protection
Cisco has rolled out an AI Security and Safety Framework that it hopes will help customers and the industry get out in front of what is expected to be a potential flood of adversarial threats, content safety failures, model and supply chain compromise, and agentic behavior problems as AI becomes an integral part of the enterprise network.

With AI, humans, organizations, and governments cannot adequately comprehend or respond to the implications of such rapidly evolving technology and the threats that ensue, wrote Amy Chang, who leads threat and security research in Cisco’s AI Software and Platform group, in a blog post about the new Integrated AI Security and Safety Framework. “Organizations are deploying systems whose behavior evolves, whose modes of failure are not fully understood, and whose interactions with their environment are dynamic and sometimes unpredictable,” Chang stated.

The framework is Cisco’s bid to define the common language for AI risk before attackers and regulators do, according to the vendor. It represents one of the first holistic attempts to classify, integrate, and operationalize the full range of AI risks. This vendor-agnostic framework provides a structure for understanding how modern AI systems fail, how adversaries exploit them, and how organizations can build defenses that evolve alongside capability advancements, Chang wrote.

The AI Security and Safety Framework is built on five elements that address an evolving AI threat landscape: the integration of AI threats and content harms, development lifecycle awareness, multi-agent coordination, multimodality, and audience-aware utility. In further detail:

- **Threats and harms**: Adversaries exploit vulnerabilities across both domains and often link content manipulation with technical exploits to achieve their objectives. A security attack, such as injecting malicious instructions or corrupting training data, often culminates in a safety failure, such as generating harmful content, leaking confidential information, or producing unwanted or harmful outputs, Chang stated. The framework’s taxonomy brings these elements into a single structure that organizations can use to understand risk holistically and build defenses that address both the mechanism of attack and the resulting impact.

- **AI lifecycle**: Vulnerabilities that are irrelevant during model development may become critical once the model gains access to tooling or interacts with other agents. The framework follows the model across this entire journey, making it clear where different categories of risk emerge and how they may evolve, and letting organizations implement defense-in-depth strategies that account for how risks evolve as AI systems progress from development to production.

- **Multi-agent orchestration**: The framework also accounts for the risks that emerge when AI systems work together, encompassing orchestration patterns, inter-agent communication protocols, shared memory architectures, and collaborative decision-making processes, Chang stated.

- **Multimodal threats**: Threats can emerge from text prompts, audio commands, maliciously constructed images, manipulated video, corrupted code snippets, or even embedded signals in sensor data, Chang stated. As research into how multimodal threats can manifest continues, treating these pathways consistently is essential, especially as organizations adopt multimodal systems in robotics and autonomous vehicle deployments, customer experience platforms, and real-time monitoring environments.

- **Audience-aware**: Finally, the framework is intentionally designed for multiple audiences. Executives can operate at the level of attacker objectives, security leaders can focus on techniques, and engineers and researchers can dive deeper into sub-techniques. Drilling down even further, AI red teams and threat intelligence teams can build, test, and evaluate procedures. All of these groups can share a single conceptual model, creating alignment that has been missing from the industry, Chang stated.

The framework includes the supporting infrastructure, complex supply chains, organizational policies, and human-in-the-loop interactions that collectively determine security outcomes. This enables clearer communication between AI developers, AI end users, business functions, security practitioners, and governance and compliance entities, Chang stated.

The framework is already integrated into Cisco’s AI Defense package, Chang stated. AI Defense offers protection to enterprise customers developing AI applications across models and cloud services. It includes four key components: AI Access, AI Cloud Visibility, AI Model and Application Validation, and AI Runtime Protection.

There are additional model context protocol (MCP), agentic, and supply chain threat taxonomies embedded within the AI Security Framework. Protocols like MCP and A2A govern how LLMs interpret tools, prompts, metadata, and execution environments, and when these components are tampered with, impersonated, or misused, benign agent operations can be redirected toward malicious goals, Chang stated.

“The MCP taxonomy (which currently covers 14 threat types) and our A2A taxonomy (which currently covers 17 threat types) are both standalone resources that are also integrated into AI Defense and into Cisco’s open source tools: MCP Scanner and A2A Scanner. Finally, supply chain risk is also a core dimension of lifecycle-aware AI security. We’ve developed a taxonomy that covers 22 distinct threats and is simple,” Chang said.

Cisco isn’t the only vendor to offer an AI security framework. AWS, Microsoft Azure, Palo Alto Networks, and others have frameworks as well, but Cisco says they are missing key coverage areas.

“For years, organizations that attempted to secure AI pieced together guidance from disparate sources. MITRE ATLAS helped define adversarial tactics in machine learning systems. NIST’s Adversarial Machine Learning taxonomy described attack primitives. OWASP published Top 10 lists for LLM and agentic risks. Frontier AI labs like Google, OpenAI, and Anthropic shared internal safety practices and principles. Yet each of these efforts focused on a particular slice of the risk landscape, offering pieces of the puzzle but stopping short of providing a unified, end-to-end understanding of AI risk,” Chang wrote.

Chang stated that no existing framework covers content harms, agentic risks, supply chain threats, multimodal vulnerabilities, and lifecycle-level exposure with the completeness needed for enterprise-grade deployment. The real world does not segment these domains, and adversaries certainly do not either.
www.networkworld.com
December 17, 2025 at 8:51 PM
Reposted
I wrote CISA’s Cyber Guidance for Small Businesses at the start of 2023. I’m biased, but I still think it’s one of the best starting points for organizations that aren’t sure how to begin a cybersecurity program. But if you could change one thing, what would it be?

www.cisa.gov/cyber-guidan...
Cyber Guidance for Small Businesses | CISA
Cyber incidents have surged among small businesses that often do not have the resources to defend against devastating attacks like ransomware. The security landscape has changed, and our advice needs…
www.cisa.gov
December 17, 2025 at 6:00 PM
Reposted
P4 programming: Redefining what’s possible in network infrastructure
Network engineers have spent decades working within rigid constraints. Your switch vendor decides what protocols you can use, what features you get, and when you get them. Need something custom? You’re out of luck. That’s changing, and P4 is a primary driver. P4 lets you program the data plane, the part of switches and SmartNICs that actually moves packets. This isn’t theoretical. Organizations are running P4 in production today, handling real traffic for applications that can’t wait years for vendor feature requests to materialize. If you’re planning network infrastructure for the next five to ten years, understanding P4 isn’t optional anymore.

## What P4 actually does

The core idea is simple: separate the control plane (which decides where packets go) from the data plane (which moves packets there), then make the data plane programmable. OpenFlow did the first part. P4 takes it further by letting you define how packets get processed, not just where they go.

Think about traditional network hardware. It knows Ethernet, IP, TCP, UDP, maybe VXLAN if you’re lucky. Send it a packet with a custom header format? The device treats everything after the outer headers as opaque payload. You can’t route based on your custom fields. You can’t modify them. You’re stuck.

With P4, you write the parser yourself. You tell the switch or SmartNIC exactly what your custom protocol looks like: where each field starts, how long it is, what values matter. Then you define match-action rules: if this field equals X, do Y. The device compiles your program and executes it on every packet at line rate.

Here’s what makes this powerful: you’re not limited to protocols that existed when the hardware shipped. Need to support a new encapsulation format next month? Write the parser, compile, deploy. No firmware update. No vendor involvement. No waiting.

## Real problems P4 solves

### Visibility that actually tells you something

Traditional monitoring gives you SNMP counters (updated every 30 seconds, way too slow) or NetFlow samples (statistically useful but incomplete). Neither tells you what happened to a specific transaction at a specific moment.

P4 changes this completely. Your switches and SmartNICs can add metadata to packets as they flow through: timestamps, queue depths, and congestion indicators. The application receiving the packet gets real data about what happened in the network. A database query that normally takes 5ms suddenly takes 50ms? You know exactly which device had congestion, when it happened, and how bad it was.

Real example: a retail company deployed P4 telemetry on both their switches and server SmartNICs before Black Friday. Their traditional monitoring showed everything looked normal: average latency within bounds, no packet loss. But P4 telemetry revealed that 2% of shopping cart transactions were hitting 500ms delays. It turned out specific switch ports had misconfigured buffers that only showed up under bursty traffic. They found and fixed it before it became a revenue problem. Their old monitoring system would’ve completely missed this.

### Security at every layer

Most networks handle DDoS protection with dedicated appliances: expensive boxes positioned at chokepoints. P4 moves that protection everywhere, from the network fabric to the server edge.

Simple example: DNS amplification attacks. A P4 program on a SmartNIC tracks query-to-response ratios per source IP. See 1 query and 50 responses? That’s amplification. Drop the responses automatically before they even reach the server CPU (a minimal sketch of this idea follows below).
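To make the parser and stateful match-action ideas above concrete, here is a minimal, hypothetical P4_16 sketch of that DNS-response-tracking idea, written against the open source v1model architecture (the one BMv2 emulates). It simplifies the query-to-response ratio down to a plain per-destination response counter, and the header layouts, register size, and drop threshold are all invented for illustration; this is not the program any vendor or operator mentioned in the article actually runs.

```p4
#include <core.p4>
#include <v1model.p4>

// Illustrative headers: only the fields this example needs.
header ethernet_t { bit<48> dstAddr; bit<48> srcAddr; bit<16> etherType; }
header ipv4_t {
    bit<4>  version; bit<4> ihl; bit<8> diffserv; bit<16> totalLen;
    bit<16> identification; bit<3> flags; bit<13> fragOffset;
    bit<8>  ttl; bit<8> protocol; bit<16> hdrChecksum;
    bit<32> srcAddr; bit<32> dstAddr;
}
header udp_t { bit<16> srcPort; bit<16> dstPort; bit<16> len; bit<16> checksum; }

struct headers_t  { ethernet_t eth; ipv4_t ipv4; udp_t udp; }
struct metadata_t { }

// Parser: the "tell the device what your protocol looks like" part.
parser MyParser(packet_in pkt, out headers_t hdr,
                inout metadata_t meta, inout standard_metadata_t std) {
    state start      { pkt.extract(hdr.eth);
                       transition select(hdr.eth.etherType) { 0x0800: parse_ipv4; default: accept; } }
    state parse_ipv4 { pkt.extract(hdr.ipv4);
                       transition select(hdr.ipv4.protocol) { 17: parse_udp; default: accept; } }
    state parse_udp  { pkt.extract(hdr.udp); transition accept; }
}

control MyVerifyChecksum(inout headers_t hdr, inout metadata_t meta) { apply { } }

control MyIngress(inout headers_t hdr, inout metadata_t meta,
                  inout standard_metadata_t std) {
    // Per-destination state: how many DNS responses each (hashed) host has received.
    register<bit<32>>(65536) dns_responses;

    apply {
        if (hdr.udp.isValid() && hdr.udp.srcPort == 53) {
            bit<32> idx;
            bit<32> cnt;
            // Hash the destination address into a register slot.
            hash(idx, HashAlgorithm.crc32, 32w0, { hdr.ipv4.dstAddr }, 32w65536);
            dns_responses.read(cnt, idx);
            cnt = cnt + 1;
            dns_responses.write(idx, cnt);
            // Invented threshold: once a host has received far more answers than
            // it plausibly asked for, start dropping further DNS responses.
            if (cnt > 1000) { mark_to_drop(std); }
        }
        // Forwarding is left trivial so the sketch stays short; a real program
        // would use L2/L3 match-action tables populated by the control plane.
        std.egress_spec = 1;
    }
}

control MyEgress(inout headers_t hdr, inout metadata_t meta,
                 inout standard_metadata_t std) { apply { } }
control MyComputeChecksum(inout headers_t hdr, inout metadata_t meta) { apply { } }

control MyDeparser(packet_out pkt, in headers_t hdr) {
    apply { pkt.emit(hdr.eth); pkt.emit(hdr.ipv4); pkt.emit(hdr.udp); }
}

V1Switch(MyParser(), MyVerifyChecksum(), MyIngress(), MyEgress(),
         MyComputeChecksum(), MyDeparser()) main;
```

In a real deployment the threshold, counter aging, and forwarding tables would be driven from the control plane (for example over P4Runtime) rather than hard-coded, and the logic would track queries as well as responses.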
The SmartNIC maintains state, makes decisions, and acts, all at wire speed, while forwarding legitimate traffic normally.

More advanced implementations get really interesting. One financial services company uses P4 on SmartNICs to enforce API call sequences at the server edge. You must call their authentication endpoint first, then data endpoints, then logout. Try to grab data without authenticating? The P4 program drops your packets immediately at the NIC, before consuming any server resources. It maintains per-connection state machines, something very hard to achieve with traditional fixed-function switches and NICs.

### Offload and acceleration

SmartNICs running P4 can offload network functions from server CPUs. Encryption, encapsulation, load balancing, and traffic shaping are all handled at the NIC before packets reach the host. This frees up CPU cycles for actual application workloads.

One cloud provider deployed P4 SmartNICs across their compute fleet to handle VXLAN encapsulation and security policy enforcement. The result: a 30% reduction in CPU overhead for networking, which translated directly into more capacity for customer workloads. The same hardware, just programmed differently.

### Deploy new protocols in months, not years

Large cloud operators have implemented custom congestion control protocols optimized for their data center traffic patterns. Rolling that out with traditional hardware would take years: you need switches and NICs that understand the new packet format. With P4, they wrote the parser and forwarding logic, compiled it, and pushed it to existing hardware. Design to production: months.

This pattern applies broadly. Custom load balancing schemes, experimental transport protocols, new overlay formats: all deployable on hardware you already own through P4 programming.

## The parts nobody talks about (until something breaks)

### Hardware doesn’t have infinite resources

P4 programs run on ASICs and FPGAs with real physical constraints. Match-action tables hold thousands to maybe a few million entries, not billions. Stateful operations have size limits. Packet modifications must complete in nanoseconds, not microseconds.

I’ve seen engineers design beautiful table hierarchies that look perfect on paper, then discover their target hardware doesn’t have enough TCAM. The program compiles fine. It just won’t load. That’s a bad day. This applies whether you’re programming a top-of-rack switch or a server SmartNIC.

Best approach: know your hardware intimately before you write code. Understand table sizes, match types (exact vs. ternary vs. LPM), and action complexity limits. Design within those bounds from the start. Review vendor data sheets and P4 target documentation early to avoid late surprises.

### Testing isn’t optional, it’s survival

A buggy P4 program drops packets. Or worse, forwards them incorrectly. You absolutely cannot “try it and see” in production. Testing infrastructure is mandatory.

The P4 behavioral model (BMv2) lets you run your program in software: send test packets through and verify behavior before touching real hardware. Your test cases need to cover normal traffic, edge cases, malformed packets, and attack scenarios. Add negative tests for parser error paths and table-miss behavior; these are common sources of field issues.

One company I know runs 10,000+ test cases on every P4 program change. That sounds excessive until you hear they caught 43 bugs in one update, any of which would’ve caused an outage. Testing saved them.
### Portability takes real work

Different hardware targets support different P4 features. Your program might use 32 match-action stages, but some devices only support 16. Hash functions vary. Packet modification capabilities differ. Supported protocols aren’t consistent.

Perfect portability is a fantasy. Instead, maintain a core P4 program with target-specific adaptations. Use compiler directives and modular design so platform differences stay isolated in small sections. Accept that some advanced features won’t work everywhere. A switch ASIC and a SmartNIC FPGA will have different capabilities. Where feasible, align control plane integration on P4Runtime to reduce vendor lock-in at the API layer.

## How to actually deploy this

### Start small and specific

Don’t try to replace your entire network on day one. Pick one use case where P4 delivers clear value. Deploy capable hardware in targeted locations: maybe SmartNICs for critical application servers, ToR switches for specific traffic patterns, or edge routers needing custom traffic engineering.

A pattern that works well: deploy P4 hardware in monitoring mode initially. SmartNICs and switches watch traffic and generate telemetry, but don’t affect forwarding. Operations teams build confidence with low risk. Then gradually add forwarding logic and policy enforcement. Track success metrics such as latency percentiles, CPU offload, and incident mean time to resolution to justify expansion.

### Design for hybrid deployments

Not everything needs programmable processing. Run P4-capable hardware for traffic requiring custom logic. Use conventional devices for high-volume standard traffic.

Example: equip database servers with P4 SmartNICs that implement custom congestion control and security policies. Standard web servers use regular NICs. Machine learning training clusters get P4 switches with specialized flow handling. Standard office traffic uses regular switches. You get P4’s benefits precisely where they matter, while controlling cost and complexity.

### Think about control planes

P4 programs implement the data plane. Something else has to populate those match-action tables. That’s your control plane. Options include traditional routing protocols, SDN controllers, or custom applications.

Many deployments use SDN controllers that translate high-level policies into table entries pushed to switches and SmartNICs. The controller understands topology and requirements; the P4 program executes forwarding efficiently. Separating concerns keeps complexity manageable. Standardizing on P4Runtime for table programming and using gNMI for device telemetry and configuration can simplify multi-vendor control plane design.

## Building the team skills

### It’s not just network engineers

P4 programming needs hybrid expertise: deep protocol knowledge plus software development skills. Network engineers have to learn programming. Software developers have to learn networking internals.

Training should cover P4 language basics, hardware architectures (both switches and SmartNICs), testing methods, and debugging. Hands-on labs with BMv2 and real hardware are essential. Budget four to six months for engineers to become productive. Early on, consider cross-functional teams: network architects who understand requirements paired with developers who write clean code. Over time, people develop both skill sets.

### Treat it like real software

Use version control. Do code reviews. Run automated testing. Deploy in stages.
One company’s workflow: develop in BMv2, test on lab hardware, deploy to a staging environment, monitor for 48 hours, then roll out to production switches and SmartNICs.

Keep rollback procedures ready. P4 programs update without hardware changes, but you need to reverse quickly if problems emerge. Blue-green deployments or canary strategies work well for P4 rollouts in production.

## Where this goes next

Hardware support is expanding rapidly. More switch vendors and SmartNIC manufacturers are shipping P4-capable platforms. Tooling is maturing.

We’ll see tighter integration with intent-based networking, where high-level business policies automatically generate P4 programs deployed across the infrastructure. Machine learning will consume P4 telemetry from switches and SmartNICs to optimize traffic in real time. New protocols will emerge that assume P4’s flexibility instead of fighting hardware constraints. Server-side processing will increasingly leverage SmartNIC offload for network-intensive workloads.

For network architects, the question isn’t whether to adopt P4. It’s when and how. Organizations building P4 capability now gain real competitive advantage: faster feature deployment, better visibility, stronger security, and networks that adapt to business needs instead of constraining them.

Yes, this requires investment: hardware, skills, development processes. But the alternative means staying constrained by vendor roadmaps in an era where network agility increasingly determines business success. P4 offers a way out of those constraints, if you’re willing to rethink how network infrastructure works.

The transition won’t be easy. Nothing this fundamental ever is. But the organizations making this shift now, deploying P4 on both switches and SmartNICs across their infrastructure, will help define what “modern networking” means for the next decade. The rest will spend that decade catching up.

**This article is published as part of the Foundry Expert Contributor Network. Want to join?**
www.networkworld.com
December 13, 2025 at 3:15 AM
Reposted
Is the recent WhatsApp incident the largest data leak in history? Here’s our take: threema.com/bp/whatsapp-...
On WhatsApp’s Recent Data Leak
This week, various media outlets reported that WhatsApp exposed user data on a massive scale. Some called it the “largest data leak in history,” others said the situation to be worse than one might th...
threema.com
November 21, 2025 at 3:35 PM
Reposted
You requested it, now it’s here - introducing Proton Sheets! The privacy-first alternative to Excel and Google Sheets.

Spreadsheets form the framework of modern businesses, and with Proton Sheets, you can ensure your data is private and secure.

1/4
December 4, 2025 at 12:30 PM
Reposted
Andrew Wheeler of HPE Labs: Being a constant learner is key to being a good technologist
_For the last five years, Andrew Wheeler has been in charge of HPE Labs, which is focused on driving innovative technologies from R&D to commercialization. The senior vice president and director of HPE Labs was in Barcelona this week, where HPE Discover 2025 took place. Wheeler sat down with Esther Macías, chief editor of Computerworld Spain, to talk about the projects being developed under his wing. He reflected on the future of more embryonic technologies, such as quantum computing, as well as the rapid evolution of artificial intelligence, which, he predicts, “will completely change the way companies operate.”_

**You have been leading HPE Labs for five years, although in reality you have been with the project for much longer, and even longer with the company. What has your experience been these last few years, which were marked by unprecedented technological developments in the fields of computing and artificial intelligence?**

The evolution has been really interesting. I have spent my entire professional career in areas dedicated to research and product development. I’ve been doing this for 31 years, working on a wide variety of things, from silicon and system design to business-critical technology development, high-performance computing, hyperconvergence… And if we count cloud-related projects, the list would be endless. That’s probably why I’ve stayed with the same company, because we do all kinds of different things here. When I joined, in fact, I joined the labs organization, the applied research group that serves the entire company. The fascinating thing about this center is the breadth of technology it operates across and that we are constantly exposed to. In fact, before the company split in two, in the old HP Labs we did literally everything from inkjet and other printing projects to supercomputing, and all on a large scale. I found that very interesting, the level of breadth we were working with. But to the question of how the landscape has changed over the last few years: slowly but steadily, we’ve been converging to get to where we are today. Now our focus areas are networking, cloud, and artificial intelligence. All of HPE’s business and the reorganization that the company has done has been channeled into these three areas, so our work has gone from being very broad to being very specialized.

**Now your focus is on the areas that HPE’s own management emphasizes: networking, cloud, and AI. How has the latter changed? Is it changing the landscape of the former? How do you see it from your perspective as a researcher?**

Our perspective, and what we work on every day, is focused on the application of AI. That’s our current focus and it’s developing on two fronts. One is the application of this technology internally, that is, applied to our supply chain organization, to support, to the organization itself, finance, marketing and, of course, to the product teams. All areas of HPE are now asking how AI can make us more efficient as a company and improve our customers’ experience. The other front is enabling AI, whether in hardware, networking, storage, or hybrid cloud, i.e. equipping our products to support this technology and vice versa.

**The buzz generated in the market by AI, especially its new flavors, generative and agentic, is impressive. Would you say there is a bubble around these technologies?**

I hate to say yes, but there may be a bubble, although a natural correction will eventually occur.
The big users of AI, the creators of the models, which are few in number, will spend whatever it takes to maintain leadership in this space. But at the overall enterprise level, it’s all about ROI; there it’s about organizations implementing smaller models, albeit leveraging the big ones. We’re talking, of course, about agentic AI, which is what we’re largely focused on. But going back to the initial thread, and being aware that we are in the expectation phase and that a balance will be reached, there is no doubt that artificial intelligence is going to completely change the way companies operate. This much is clear.

> **_“The very broad impact it will have is the disruptive potential of AI.”_**

**Would you say it is the most disruptive technology today?**

Yes, I would say it is because of the very broad impact it will have; that’s its disruptive potential. And while there may be a bubble, it’s here to stay. As they say, the genie is out of the bottle.

**It has been announced that, together with Nvidia, HPE will launch in Grenoble (France) what they call the AI Factory Lab, with the aim of “responding to the needs of customers seeking greater control and autonomy over their AI infrastructure and data.” The Labs are involved in this project; what are you pursuing in particular?**

The idea is to have an environment where companies can come and test AI projects. It gives them a testing ground, a sandbox environment, where they can start exploring. Not every company can have AI supercomputers. Those are great for power users and people working with large models, but many companies don’t need something that big.

**You mentioned earlier that one line of work for the Labs is to integrate AI into HPE’s technology offering. What are the big challenges you face in this integration?**

There are many considerations to take into account. The first is from an ethical point of view. In fact, the first challenge we faced was to provide our engineers and product developers with a manual or a set of principles that they had to comply with. We spent 18 months putting together our ethical AI principles, which basically consist of a checklist of five or six points at the highest level that ensure that, if they are met, the final product meets the standards set. This is an issue that we have taken very seriously, conducting annual training for our employees on AI ethics. This ensures that we continue to develop and deploy technology and projects in the right way. Having established the AI ethics framework for the company, we address the next challenge, relating to the user experience. Here, as seen at the Discover event in Barcelona, we already have things in place, such as agent technology, AI operations in networks or in GreenLake, and the projects we are already tackling in storage… A large part of the engineering community feels very comfortable moving full speed ahead in this area. However, as I said earlier, much of the application of AI and the associated benefits will come with the adoption of this technology internally. And here we may not have the same experience in areas such as the finance or support organization. My next challenge is how to equip and train the vast majority of users. In the end, as Antonio [Neri, HPE’s CEO] himself says in all his conferences, everyone, regardless of the discipline or function they have in the company, will have a specialization in AI, and this is what we are working on.

**Another technology that may be disruptive, but over a longer term, is quantum computing. How do you see this topic? What is the reality?**

Quantum is another example of a technology around which there is a lot of hype and promotion by the industry. That said, it has incredible potential. The first thing to keep in mind is that quantum computing is not mainstream. There is not going to be a quantum laptop to replace the one we use for our day-to-day work. That’s not how it works. It’s an accelerator to drive specific forms of computing that would otherwise be infeasible. A good analogy is the current use of GPUs. We don’t use a GPU for general-purpose computing, although we do use some aspects of it, perhaps in a phone, to help render the screen, and so on. What do we do today with a GPU, and what will we do with a quantum accelerator? It usually sits alongside the traditional CPU and acts as an offload engine. That, ultimately, is how it will go into production: with a hybrid system that has what we call classical computing, CPU and GPU, to which we will add a networked quantum accelerator; then we will have a workflow that, ideally, abstracts a lot of that complexity, because that’s the nature of computing. That’s been the evolution since assembly language programming. The idea is how to make this technology more accessible for mass development.

**Then the key, as other companies like IBM have already said, is a combination of quantum technology with classical technology.**

Right, yes. And then it will be implemented in centers like the supercomputing center in Barcelona or wherever; that’s how it will ultimately be used, especially at the beginning, because it will be very scarce computing in terms of capacity. And expensive, so you will want to do as much as possible with CPUs and GPUs, and then use quantum computing only for what quantum computing can uniquely do. That will be the recipe.

Andrew Wheeler, director of HPE Labs, speaking at the Discover 2025 event in Barcelona, Spain. HPE

> **_“Quantum computing is not general-purpose […]. It is an accelerator to drive specific forms of computing that would otherwise be unfeasible.”_**

**Together with seven other technology organizations you have launched an alliance, called the Quantum Scaling Alliance, which aims to “make quantum computing scalable, practical and transformative across industries.” The project is co-led by Masoud Mohseni of HPE Labs and John Martinis, the 2025 Nobel Laureate and co-founder and CTO of Qolab. What can you tell us about it?**

The idea is to bring together companies, organizations, and universities that really want to develop and build that practical quantum system, encompassing everything from silicon and fabrication to the algorithm, or in other words, from building the qubits to building the circuits, creating the control systems, and developing the algorithms that can take advantage of it. Only collectively can this kind of effort be achieved, because I don’t think any one company alone is really in a position to overcome all the technological hurdles that are going to come up in quantum technology. It is a unique ecosystem.

**Creating an ecosystem, the current big focus of tech companies.**

Yes, it is needed, and they need to be open ecosystems, because that is what will accelerate progress. On the other hand, you have to ensure that there are standards and interfaces so that new components work together. That’s the only way to succeed.
**What would you say is the difference between the innovation strategy you have at HPE Labs and the labs of other big tech companies like, for example, Alphabet/Google, Microsoft, AWS, or IBM?**

I would say we focus more on applied research, which means we don’t do basic research aimed only at publishing papers or getting patents. Ours is very focused research that is applied to the company’s strategy. However, we collaborate well within the innovation ecosystem and, in some cases, with universities that help us accelerate or augment our own knowledge, or fill gaps in it. We also collaborate closely with customers and partners, with the aim of working on proofs of concept that can then be taken to market. The close connection we have with the innovation ecosystem as a whole is probably what differentiates us in the way we operate.

**Can you tell us how many people work in your labs?**

No, sorry. But I can say that we have a presence all over the world, wherever we have teams, like in the UK and elsewhere in Europe, plus, of course, the Asia region and the United States.

**Not in Spain?**

No, but when we were HP we did have a strong presence in Barcelona, so we have a very rich history in Spain [the new HP still has a strong lab in the city].

> **_“No single company on its own is in a position to overcome the technological hurdles that are going to come in quantum technology.”_**

**What are your challenges at the helm of HPE Labs?**

To say no. Because, although our focus areas are networking, cloud, and AI, the reality is that we also work in other related areas such as sustainability and security. We are a team in high demand, working hand in hand with the product teams, and sometimes we have to say no to requests that come to us. Finding the balance is a challenge. We could do with more people.

**In your division, the Labs, you have a program to encourage technical careers. What would you recommend to young talent?**

Several things. First, don’t become too narrowly identified with one field, for example systems management, because the wonderful thing about this market is its breadth. They have to avoid becoming stagnant and obsolete: pursue their goals, keep improving, and keep learning. Especially that last one, being an eternal learner. That, I would say, is the main thing to be a good technologist.

_This article originally appeared on Computerworld Spain._
www.networkworld.com
December 5, 2025 at 11:38 PM
Reposted
First spotting of the Spanish edition of EMPIRE OF AI — EL IMPERIO DE LA IA — in the wild 😍😍

Happy Spanish edition pub week to me 🥳🥳🥳
Book received: El imperio de la IA. Sam Altman y su carrera por dominar al mundo, by Karen Hao @karenhao.bsky.social
Courtesy of: #EdicionesPenínsula
#ElImperioDeLaIA #EmpireOfAI #KarenHao #SamAltman #OpenAI
www.planetadelibros.com/libro-el-imp...
November 21, 2025 at 3:32 PM
Reposted
Consider joining the GrapheneOS community in our official forum and chat rooms!

▫️ Forum: discuss.grapheneos.org
🔹 Discord: grapheneos.org/discord
🔹 Matrix: matrix.to#/#community:...

[🔹= Bridged]
GrapheneOS Discussion Forum
GrapheneOS discussion forum
discuss.grapheneos.org
January 2, 2025 at 6:54 PM
Reposted
Please consider nominating @grapheneos.org for this year's @proton.me fundraiser.

Donations are what fund our work on upcoming features & improvements, maintaining our current ones, & the upkeep of our infrastructure.

Details:
discuss.grapheneos.org/d/28065

Form:
proton.me/blog/lifetim...
Proton Foundation has launched their 8th edition Lifetime Fundraiser - GrapheneOS Discussion Forum
GrapheneOS discussion forum
discuss.grapheneos.org
November 14, 2025 at 5:38 AM