Recently I was sitting across the table from the executive leadership team of a global healthcare services company, and I could tell by their body language that something was off. They’d invited me in to consult on their AI strategy, but they seemed defensive. The Chief Information Officer, a sharp guy who’d clearly done his homework on emerging tech, launched into a detailed explanation of their new large language model. It was impressive, no doubt. This model automated their insanely complex data entry process — think mountains of handwritten patient forms and a chaotic mix of digital files — and consolidated everything into a single, coherent record. They’d spent well over a year developing and testing it, and were understandably proud of their accomplishment. But as he talked about the rollout, it was obvious to me that they were already lagging behind.
“You’ve built an incredible foundation,” I said, “but this is just the starting line.” The energy in the room shifted. This wasn’t the reaction they were expecting. Here was a company that had invested heavily in AI, built a sophisticated system, and deployed a successful pilot. Like so many other executive leadership teams who’ve spent the past year building and implementing AI tools, they thought they were done. In reality, their transformation had only begun. AI is merely one facet of a sweeping technological change underway, and companies that fail to recognize the importance of other converging technologies risk being left behind.
During my meeting with the company’s executive team, I acknowledged that, like so many leaders, they had been right to focus intensely on AI, even if that focus came late. However, LLMs were just a starting point. With new developments happening at breakneck speed, the company would need to build a new muscle for continuous transformation. That’s because AI is just one of three groundbreaking technologies shifting the business landscape. The other two, advanced sensors and biotechnology, are less visible though no less important, and they have been quietly advancing. Soon, the convergence of these three technologies will underpin a new reality that shapes the future decisions of every leader across industries.
I call this new reality “living intelligence”: systems that can sense, learn, adapt, and evolve, made possible by artificial intelligence, advanced sensors, and biotechnology. Living intelligence will drive an exponential cycle of innovation, disrupting industries and creating entirely new markets. Leaders who focus solely on AI without understanding its intersections with these two other technologies risk missing a wave of disruption that is already forming.
If AI is the everything engine, that engine needs data. Much of that data will likely come from advanced sensors and a network of interconnected devices that communicate and exchange data, fueling the advancement of AI. This function is why sensors are the next general-purpose technology, a fact that many leaders are currently missing.
Most people don’t realize that sensors are already being put to use in multiple industries. It’s an understandable oversight; we often use technology without thinking about it. But once you start to look for them, sensors are everywhere. An iPhone, for example, comes embedded with a dozen sensors, ranging from proximity sensors that detect nearby objects to Face ID sensors that authenticate a user. All of them mine and refine your data, all day long. Xylem, a water technology company, developed a new type of water meter that leverages advanced sensors and AI to manage the challenges of water distribution in densely populated settings. The meters continuously measure water flow and provide granular data on consumption patterns; they can also identify anomalies in water flow, like drops in pressure or irregular usage patterns, that typically result from a leak.

Meanwhile, a new class of biological sensors can be worn and ingested. Their purpose: to send and receive data in real time in order to diagnose and monitor disease, detect pathogens, and enable faster recovery. One such biosensor includes a subclass of tiny machines, called nanobots, that can monitor patient health in real time after being injected into the bloodstream. Acting as internal surveillance systems, nanobots can detect changes in environmental stimuli and conditions, allowing for continuous health monitoring and early diagnosis of potential health issues.
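To make the water-meter example above concrete, here is a minimal sketch of what leak detection over streaming readings might look like. It is purely illustrative and not Xylem’s implementation; the readings, window size, and threshold are assumptions chosen only to show the pattern of comparing each new measurement against a rolling baseline.

```python
# Illustrative sketch of leak detection from smart-meter readings.
# Not Xylem's system; the data, window size, and threshold are made-up
# values chosen only to show the idea: compare each reading against a
# rolling baseline and flag sustained deviations.

from collections import deque
from statistics import mean, stdev


def detect_anomalies(readings, window=12, z_threshold=3.0):
    """Yield (index, value) for readings that deviate sharply from the
    rolling baseline of the previous `window` measurements."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            baseline, spread = mean(history), stdev(history)
            if spread > 0 and abs(value - baseline) / spread > z_threshold:
                yield i, value
        history.append(value)


if __name__ == "__main__":
    # Simulated hourly flow (liters/hour): steady household usage,
    # followed by a sustained spike that looks like a leak.
    flow = [20, 22, 21, 19, 20, 23, 21, 20, 22, 21, 20, 22,
            21, 20, 95, 97, 96, 20, 21, 22]
    for hour, value in detect_anomalies(flow):
        print(f"reading {hour}: flow {value} L/h flagged as anomalous")
```

In a production system, the baseline would account for time of day and seasonality, and flagged readings would feed a trained model rather than a simple statistical test; the sketch only shows the basic shape of the pipeline.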
As more sensors surround us, they will capture and transmit not just more data, but more types of data. While organizations are busy creating and using LLMs, they will soon need to build LAMs: large action models. If LLMs predict what to say next, LAMs predict what should be done next, breaking down complex tasks into smaller steps. Unlike LLMs, which primarily generate content, LAMs are optimized for task execution, enabling them to make real-time decisions based on specific commands; they will be enormously helpful in organizations of every size and scope. The earliest examples of LAMs are Anthropic’s Claude and ACT-1 from Adept.ai. Both are designed to interact directly with code and digital tools and to perform actions within software applications like a web browser. LAMs are like LLMs, but with more data and multimodal requirements. They will use the behavioral data we generate when we use our phones or operate our vehicles, along with a constellation of sensors all around us collecting multiple streams of data at once from wearables, extended reality devices, the internet of things, the home of things, smart cars, smart offices, and smart apartments. As LAMs become more embedded in our environments, they will operate seamlessly, often without users’ direct engagement.
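The plan-then-act pattern behind a LAM can be sketched in a few lines of code. The snippet below is a toy illustration, not Anthropic’s or Adept’s implementation: the planner is hard-coded where a real model would predict the next steps, and the tool names and task are hypothetical.

```python
# Toy "large action model" loop: plan a goal as discrete steps, then
# execute each step with a tool. The planner and tools are stand-ins,
# not any vendor's actual API.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Step:
    tool: str        # name of the tool to invoke
    argument: str    # a single argument, kept simple for illustration


def plan(goal: str) -> List[Step]:
    """Stand-in planner: a real action model would predict these steps
    from context, behavioral data, and sensor data, not fixed rules."""
    if "invoice" in goal.lower():
        return [
            Step("fetch_record", "invoice-1234"),
            Step("extract_total", "invoice-1234"),
            Step("notify", "accounts-payable"),
        ]
    return [Step("notify", "human-operator")]  # fall back to a person


# Tool registry: each "tool" is just a function the agent may call.
TOOLS: Dict[str, Callable[[str], str]] = {
    "fetch_record": lambda arg: f"fetched {arg}",
    "extract_total": lambda arg: f"total for {arg}: $412.50",
    "notify": lambda arg: f"notified {arg}",
}


def run(goal: str) -> None:
    """Plan the goal, execute each step, and log the observation."""
    for step in plan(goal):
        observation = TOOLS[step.tool](step.argument)
        print(f"{step.tool}({step.argument}) -> {observation}")


if __name__ == "__main__":
    run("Reconcile this invoice and alert accounts payable")
```

The design point is the separation between planning (deciding what should be done next) and execution (calling a tool and observing the result), which is what distinguishes an action model from a purely generative one.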
What so many organizations are failing to imagine is how LAMs will evolve into personal large action models, or PLAMs, which will eventually interact with different systems, learn from large datasets, and adapt to changing business needs. PLAMs will have the capacity to improve our digital, virtual, and physical experiences by streamlining decision-making, managing tasks, negotiating deals, and anticipating our needs based on behavioral data. They won’t need conscious input from us. These autonomous agents will be able to personalize recommendations, optimize purchases, and communicate with other trusted agents, enabling seamless transactions while maintaining a user’s privacy and preferences; PLAMs, by definition, have access to all of the user’s data on their personal devices.
In the near future, companies like Apple or Google will be motivated to embed even more smart sensors in devices to continuously collect and analyze personal data, such as health metrics, location data, and information about daily habits. All this data will be used to create highly individualized profiles that link to personal language and action models, tailored specifically to each user’s needs and preferences. While people will have PLAMs, corporations will likewise have one or more corporate large action models (CLAMs), and digital-forward governments will have government large action models (GLAMs).
Living intelligence’s third general-purpose technology is bioengineering, which involves using engineering techniques to build biological systems and products, such as designer microbes that can be engineered for specific tasks. Right now, this is the easiest to dismiss, but in the longer term it could prove to be the most important general-purpose technology. Paired with AI, bioengineering can create “generative biology” (genBio), which uses data, computation, and AI to predict or create new biological insights: generating new biological components, such as proteins, genes, or even entire organisms, by simulating and predicting how biological elements behave and interact.
We can already see the potential of this technology. Companies like Ginkgo Bioworks are using genBio to design and create custom enzymes that can be applied in industrial processes. For example, generative algorithms help engineer enzymes that break down complex molecules, such as plastics and other pollutants. Google DeepMind created AlphaProteo, which designs entirely new proteins with specific properties that could have applications in biomaterials and drug development. Another DeepMind project, a tool called GNoME (Graph Networks for Materials Exploration), has already predicted the stability of millions of new inorganic materials. Imagine a building made of materials that can autonomously self-regulate temperature, light, and ventilation, without a computer (or a human) in the loop.
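The workflow behind tools like these generally follows a generate-and-screen loop. The toy sketch below is a stand-in, not AlphaProteo or GNoME: it proposes random protein sequences and scores them with a crude, made-up proxy for stability, where a real system would use a learned model trained on experimental and simulated data.

```python
# Toy generate-and-screen loop in the spirit of generative biology and
# materials discovery. The "stability" score is a made-up proxy, purely
# for illustration.

import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # one-letter amino acid codes


def random_candidate(length=30):
    """Propose a candidate protein sequence at random. A real generative
    model would propose candidates conditioned on a design target."""
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))


def stability_score(sequence):
    """Stand-in scoring function: rewards hydrophobic residues, a crude
    and purely illustrative proxy for fold stability."""
    hydrophobic = set("AILMFVWY")
    return sum(1 for aa in sequence if aa in hydrophobic) / len(sequence)


def screen(n_candidates=1000, keep=5):
    """Generate candidates, score them, and return the top few, the
    ones a team might send on to simulation or the wet lab."""
    candidates = (random_candidate() for _ in range(n_candidates))
    return sorted(candidates, key=stability_score, reverse=True)[:keep]


if __name__ == "__main__":
    for seq in screen():
        print(f"{seq}  score={stability_score(seq):.2f}")
```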
Farther out, living intelligence could lead to living machines. Organoid intelligence (OI) made its debut as a new field of science in 2024. OI uses lab-grown tissues, such as brain cells and stem cells, to create biological computers that mimic the structure and function of the human brain. An organoid is more or less a tiny replica of tissue that functions like an organ of the body. In 2021, researchers at Cortical Labs in Melbourne, Australia, made a miniature organoid brain that worked like a computer. They called it DishBrain, attached it to electrodes, and taught it how to play the 1970s video game Pong. DishBrain is made of about 1 million live human and mouse brain cells grown on a microelectrode array that can receive electrical signals. The signals tell the neurons where the Pong ball is located, and the cells respond. The more the system plays, the more it improves. Cortical Labs is now developing a new kind of software, a biological intelligence operating system, which would allow anyone with basic coding skills to program their own DishBrains.
Although living intelligence may seem like a futuristic idea, forward-thinking CEOs and business leaders cannot afford to wait. We’re already seeing the signs of convergence in living intelligence technologies across several leading-edge industries. Early adoption is happening most intensively in industries like pharmaceuticals, medical products, health care, space, construction and engineering, consumer packaged goods, and agriculture. But applications are coming to other industries soon, creating novel white spaces of opportunity in industries like financial services. As additional industries jump on board, innovation will disperse much more broadly, fueling additional flywheel effects.
We’re going to see a compounding advancement as each technology improves. Here are five recommendations for how to act with diligence and urgency:
Perhaps the most valuable recommendation I can offer is to simply ask, “What if?” At my next meeting with the healthcare company, I asked the executive leadership team to consider scenarios for how their business might transform in the coming decade as living intelligence matures. What if there were a “health assurance” subscription package that included wearable sensors, AI-powered diagnostics, and personalized medication delivery? What if traditional providers were bypassed entirely, and startups used AI and sensor data to offer personalized health solutions directly to consumers? What if today’s bathroom is tomorrow’s diagnostic lab? What if real-time data led to real-time reporting on patient outcomes? Would there be a shift to outcome-based pricing, in which providers get paid based on the effectiveness of their treatments? All of these scenarios represent a significant shift in how value is generated.
Resist the temptation to fixate on AI as it exists today, I told the team. Take a more holistic view of the change already underway, and prepare your organization for the era of living intelligence.
(This article, by Amy Webb, was originally published in Harvard Business Review, which reserves all rights. To read the original article, please visit here.)