
Introducing Homo augmentus


My day starts with that most essential morning need: coffee. Sitting up in my bed, I hear the whirr of beans being ground downstairs – lasting a little longer than normal. The digital display of my bedside clock shows what I already know: I didn’t sleep well last night, and my digital assistant thinks an extra dose of caffeine will help get me going. Downstairs I’m savoring my morning cup, when my wife arrives in the kitchen and says “news.” A monitor in the corner immediately pops on showing the local news-station weatherman. She flicks two fingers in the air to the right, and the channel changes to CNN. An additional finger flick upwards raises the volume.

I turn my attention to the day ahead as I have a big presentation coming up at work. Earlier this week, I met with my boss to discuss the details, and now seems like a good time to go over them. Donning a tiny wireless earpiece and smart glasses, I say “recall conversation with Sean on Tuesday.” The kitchen disappears, as does the voice of the CNN anchor, and I find myself in Sean’s office. I fast-forward the discussion a few seconds to get to the relevant bit where Sean is showing me some key diagrams on the whiteboard in his office. I focus my gaze on the diagrams and my field of vision immediately zooms in on the virtual renderings. A few blinks of my eyelids snap a few pictures, which are then loaded into a presentation template that will be waiting for me on my PC when I arrive at work.

I re-emerge in my kitchen to find my wife waiting for me to join her on our daily walk to the train station. We make our way outside and encounter a cacophony of honking cars and construction. Unperturbed, I turn to my wife to ask, “what’s for dinner?” The jarring background noise immediately dissipates, as our earpieces isolate our voices from ambient sounds. My wife says she was planning on trying out a new chicken recipe. I jokingly comment on how dry her chicken dishes are before realizing the sheer magnitude of my mistake. One hard glare later, our conversation is cut off, the street noises return, and I find myself responsible for making dinner tonight.

On the train, I begin to think about my favorite chicken dish, the one my grandmother prepared when I was a kid. That recipe must be somewhere. Under my breath I mutter “Grandma’s chicken piccata.” Within moments a digital assistant finds an old photo I had snapped of my grandmother’s recipe book when she wasn’t looking. With a few gestures, my assistant generates a shopping list, sends it to Instacart and schedules a delivery drone to arrive at my home this evening right after I do. The recipe is uploaded to my kitchen monitor along with a few helpful YouTube videos on how to bread and pan-fry chicken breasts.

At the office, I dive into my presentation, and as the workday comes to a close I get an alert on my watch. It informs me that there has been an increase in inflammation biomarkers in my body over the last 12 hours and my contact tracing history suggests an 85% chance I’ve been exposed to a rhinovirus in the last 2 days. In other words, a cold is coming on. It’s nothing serious yet, and I feel fine, but just in case, I plant my thumb on the watch face, giving the explicit biometric authorization necessary for my doctor to remotely monitor my vitals and biochemistry for the next few days. If this cold blows up into something worse, at least I’ll have a prescription waiting for me at the local pharmacy.

Maybe today isn’t the best day to cook. A few gestures and verbal commands later, I’ve canceled my Instacart delivery and placed an order for chicken piccata at an Italian restaurant near my house. It won’t be as good as grandma’s, but I really need the rest.


Welcome to the era of Homo augmentus.

Just as modern Homo sapiens distinguished themselves from their prehistoric ancestors through their expanded cognitive and physical abilities, Nokia Bell Labs believes that humanity is in the process of taking another big leap forward in cognition and physiology. This leap, however, won’t be due to evolutionary biology but rather a direct result of technological enhancement. This era of Homo augmentus isn’t quite as distant as you might think. By the time we reach the 6G era 10 years in the future, the level of hyperconnectivity we’re witnessing today will move beyond simply linking “things” and extend directly to the human body and mind. These new advances in human augmentation will in turn drive unprecedented levels of human and economic productivity.

What’s more, the technologies of Homo augmentus will be broadly accessible, helping people solve everyday problems, manage their work and personal lives as well as manipulate the environments they live in. Science fiction has taught us to think of augmentation on a spectacular and often disruptive scale – high-tech implants fused to the body that allow people to perform superhuman feats. But the technology that augments us can function on a much subtler level. It can help us cook a meal, avoid an accident in traffic, recall a distant memory and keep us healthy.

The different types of augmentation

It’s easy to think of human augmentation as an intimate physical technology: a prosthesis to replace a limb or a surgical procedure to correct a physical deficiency. But Nokia Bell Labs believes in a Homo augmentus future that reaches far beyond the confines of the human body, giving us control of robots and devices that become remote extensions of ourselves. Augmentation won’t just compensate for disabilities; it will enhance our physiologies and monitor our bodies. Augmentation will no longer be limited to physical tasks; it will increase our cognitive abilities and memory, enhancing our minds as well as our muscles.

To that end, we can classify augmentations as either external or internal and either cognitive or physical:

  1. Internal Cognitive Augmentation: These technologies will enhance our minds. Internal cognitive augmentations will improve our mental focus and expand our memory as well as increase the speed at which we make decisions, learn new tasks and plan our daily lives. This will be accomplished by artificial intelligence intimately coupled to humans through physiological interfaces. For instance, sensor augmentations might record everything you see and hear, while an AI-powered digital assistant would sort through those virtual “memories,” extracting the right information you need at any given moment.
  2. External Cognitive Augmentation: These technologies will seek to extend the domain of control of our brain far outside our own bodies. Through physiological interfaces, these enhancements will allow us to perceive the world through external artificial sensors and control remote objects (both physical and digital) as easily as we control our own bodies.
  3. Internal Physical Augmentation: These technologies will repair, monitor and enhance the workings of our inner bodies. Eventually this technology may be able to replace any failing internal organ, but the greatest impact will be from the devices that provide continuous monitoring of our physiology. That would allow us to detect diseases and intervene before they become intractable, as well as track and manage epidemics. If we choose, these same devices could not only sense but enhance, pushing our physiology to its very limits, allowing for both super-athletes and super-safe workers.
  4. External Physical Augmentation: These technologies are the ones we most often associate with augmentation. Tools and machines that humans have used for centuries will become intimately coupled with the body. From prosthetic limbs to exoskeletons, these technologies could make us faster, stronger and more resilient. It’s unlikely that people would use powered limbs while going about their daily lives, but they would have their uses in industry and special situations.

For the most part, these types of augmentation won’t exist in isolation. Only by combining them can we truly augment humanity in the future. Internal cognitive augmentations will allow us to access information and draw on forgotten memories, which in turn we can act on with our external cognitive augmentations. Meanwhile, our internal physical augmentations would work hand in hand with our external physical augmentations, allowing us to perform tasks we would normally be incapable of.


The enabling technologies of human augmentation

To achieve this state of Homo augmentus, we’ll need to turn to a wide range of technologies, many of which are key areas of research at Nokia Bell Labs. We’ll need new ultra-sensitive sensors and new physiological interfaces for articulating our desires and actions. We’ll need edge cloud computing and AI/machine learning techniques to process and interpret, in real time, the tremendous amount of information our augmentations generate. And we’ll need to create new types of networks that allow our augmentations to connect and interact with each other and the outside world.

First off, we aim to perfect the brain-machine interface (BMI). We need to find faster, more efficient and more intimate ways for our minds to relay their intentions to our physical and digital environments, beyond typing on a keyboard, moving a mouse or swiping a touchscreen. The direct neural probe is probably the most extreme example. Probes inserted into the central nervous system would give direct electrochemical access to neurons, allowing human beings to control their augmentations in precisely the way their brains control muscle movement. But it’s unlikely that the average person with full motor function would subject themselves to the surgery required for direct neural interfaces.

Thankfully there are many non-invasive ways to create a brain-machine interface using sensory and motor pathways. Our nervous system extends to every part of the body, carrying muscle control and sensory input signals to and from the brain. We can hijack these pathways to communicate with the brain. Any sensory system could be used for input: hearing, vision, touch, taste, smell. And output could be read anywhere there is a strong enough electrical signal. For instance, electroencephalogram (EEG), electromyography (EMG) and electrooculography (EOG) technologies could detect and transmit electrical impulses from the brain, muscles and eyes respectively. Anywhere there is detectable body motion, there is a potential input mechanism: direction, orientation and acceleration can all be used as cues. The subtle physical gestures we take for granted to communicate with other people can be captured and digitized to enable natural interactions with machines. As an example of where this work is well underway, Nokia Bell Labs Shannon Luminary Miguel Nicolelis, director of the Duke University Center for Neuroengineering, is making great strides in using non-invasive brain-machine interfaces to help paraplegics walk again.
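To make the EMG idea concrete, here is a toy sketch of the classic first step in reading a muscle signal: rectify the raw waveform, smooth it into an envelope, and threshold the envelope to find when the muscle is active. The signal values, window size and threshold are illustrative assumptions, not a real EMG pipeline:

```python
def moving_average(signal, window):
    """Simple moving average; the window shrinks at the left edge."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        out.append(sum(signal[lo:i + 1]) / (i + 1 - lo))
    return out

def detect_activation(raw, threshold=0.5, window=3):
    """Indices where the smoothed, rectified signal exceeds the threshold."""
    envelope = moving_average([abs(x) for x in raw], window)
    return [i for i, v in enumerate(envelope) if v > threshold]

# Quiet baseline, then a burst of muscle activity, then quiet again.
raw = [0.05, -0.1, 0.02, 0.9, -1.1, 1.0, -0.8, 0.1, -0.05, 0.02]
active = detect_activation(raw)
print(active)
```

A gesture recognizer would then map the timing and shape of such activation bursts onto commands like the channel-flick or volume gestures in the opening story.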

Even an ideal BMI, however, would have no idea how you feel. Personal sensing devices today can tell you your heart rate, core body temperature, maybe even your blood pressure, but they would have no idea, for instance, that your blood is surging with inflammation biomarkers racing to stave off an unwelcome pathogen. We will need advanced biosensors that measure across the multiple modalities of our physical states. Bioelectrical and biochemical sensors can monitor our basic biological functions, from neuronal activity to basic cellular function, as the body is literally alive with electrical and chemical signals. With bioacoustic and biomechanical sensors, we can tap into the non-stop motion within our bodies. Whether it’s a heartbeat, an inhalation of air, the chewing of a delicious snack or the grind of a never-stopping digestive system, the body is constantly broadcasting its current state for those able to hear it.
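As a toy illustration of listening to that mechanical "broadcast," the sketch below estimates heart rate from a heartbeat-like signal by counting local maxima above a threshold. The synthetic waveform, 50 Hz sample rate and threshold are assumptions for illustration only; real biomechanical sensing involves far noisier data and more robust peak detection:

```python
def count_peaks(signal, threshold):
    """Count strict local maxima that rise above the threshold."""
    peaks = 0
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i - 1] < signal[i] > signal[i + 1]:
            peaks += 1
    return peaks

def heart_rate_bpm(signal, sample_rate_hz, threshold=0.5):
    """Convert a peak count over the recording window into beats per minute."""
    duration_s = len(signal) / sample_rate_hz
    return count_peaks(signal, threshold) * 60 / duration_s

# 3 seconds of synthetic signal at 50 Hz with three heartbeat-like pulses.
signal = [0.0] * 150
for p in (10, 60, 110):
    signal[p - 1], signal[p], signal[p + 1] = 0.4, 1.0, 0.4

print(heart_rate_bpm(signal, sample_rate_hz=50))
```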

Recently, my own research at Nokia Bell Labs has focused on creating advanced bio-optical sensors. The body is not only sensitive to light, but many of its processes can be unobtrusively probed from a distance with light. Photoplethysmography (PPG) and pulse oximetry are standard bio-optical sensors today, but they herald a new generation of sensors such as Optical Coherence Tomography (OCT) and photo-acoustic imaging that can probe deeper, faster and with higher resolution into human physiology. With technology stemming from Bell Labs’ telecommunication roots, we can make these traditionally medical lab-scale systems small, low-cost and even wearable.
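Pulse oximetry gives a nice flavor of how bio-optical sensing works: compare the pulsatile (AC) and steady (DC) light absorption at red versus infrared wavelengths, then map that "ratio of ratios" to blood oxygen saturation. The sketch below uses the textbook linear approximation SpO2 ≈ 110 − 25R; real oximeters use device-specific empirical calibration curves, and the waveforms here are made-up numbers in arbitrary units:

```python
def ac_dc(samples):
    """Split a waveform into its pulsatile swing (AC) and mean level (DC)."""
    dc = sum(samples) / len(samples)
    ac = max(samples) - min(samples)
    return ac, dc

def spo2_estimate(red, infrared):
    """Textbook ratio-of-ratios SpO2 approximation (illustrative only)."""
    ac_r, dc_r = ac_dc(red)
    ac_ir, dc_ir = ac_dc(infrared)
    ratio = (ac_r / dc_r) / (ac_ir / dc_ir)
    return 110 - 25 * ratio

# Illustrative red and infrared absorption waveforms (arbitrary units).
red = [1.00, 1.01, 1.03, 1.01, 1.00, 0.99]
infrared = [2.00, 2.06, 2.12, 2.06, 2.00, 1.96]
print(round(spo2_estimate(red, infrared), 1))
```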

Creating these advanced sensor and interface technologies is a key step, but equally important is optimizing their form and material makeup. Human-made technology is very seldom compatible with humans themselves. Obtrusive, bulky and uncomfortable augmentations would only offset the benefits those augmentations bring. We need to develop biocompatible materials that integrate naturally with the human body, whether they are wearable accessories woven into clothing or implanted on or directly below the skin.

None of the innovations we’ve discussed so far would by themselves enable the oncoming wave of human augmentation. In much the same way the internet has connected billions of individual computers and devices to create something far greater than its constituent pieces, we need to build personalized networks that can intelligently connect the devices we wear while providing the low-latency, high-fidelity, private and secure communication needed to connect our physiological, physical and digital worlds. This will be the role of body area networks (BANs). These networks will aggregate sensor data from multiple sources on our persons and in our environments. They will infer a deep and accurate understanding of our physiological and psychological states and our environmental contexts, then relay real-time feedback from one system to another. In addition, BANs will provide the distributed computing and energy resources to power our devices wirelessly over extended periods of time. We won’t have individual devices so much as we’ll have a highly integrated, interdependent and persistent augmentation web.
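A minimal sketch of the aggregation role a BAN hub might play is below: it collects the latest reading from each on-body sensor, keeps only one value per sensor, and discards stale data before reporting a combined snapshot. The sensor names, freshness window and class shape are purely hypothetical; this is not a real BAN protocol, which would also handle security, power and real-time feedback:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str
    value: float
    timestamp_s: float

class BodyAreaNetwork:
    """Toy aggregator: newest reading per sensor, stale data dropped."""

    def __init__(self, max_age_s=5.0):
        self.max_age_s = max_age_s
        self.latest = {}

    def ingest(self, reading):
        # Keep only the most recent reading for each sensor.
        prev = self.latest.get(reading.sensor)
        if prev is None or reading.timestamp_s > prev.timestamp_s:
            self.latest[reading.sensor] = reading

    def snapshot(self, now_s):
        # Report only readings that are still fresh.
        return {
            s: r.value
            for s, r in self.latest.items()
            if now_s - r.timestamp_s <= self.max_age_s
        }

ban = BodyAreaNetwork()
ban.ingest(Reading("heart_rate_bpm", 72, timestamp_s=10.0))
ban.ingest(Reading("skin_temp_c", 33.1, timestamp_s=11.0))
ban.ingest(Reading("heart_rate_bpm", 75, timestamp_s=12.0))  # supersedes 72
print(ban.snapshot(now_s=13.0))
```

In a real augmentation web, the snapshot step is where the interesting work happens: inferring physiological and psychological state from many such streams and feeding the result back to other devices with low latency.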

Our augmented future, both near and far

Through brain-machine interfaces, biosensors, biocompatible materials and body area networks, coupled with emerging AI technologies and advances in 6G communications, we will be able to achieve the level of augmentation I described at the beginning of this post. We will enhance our minds and monitor our physiological states through cognitive and internal augmentation, and we will perform new physical feats and manipulate our environments through external and physical augmentations. The greatest obstacles we face in realizing that Homo augmentus future are creating the personal networks that can connect these highly heterogeneous interfaces to our physiology and weaving our complex biological data into meaningful insights and actionable feedback.

But the innovation path of augmentation doesn’t necessarily stop there. In the distant future we may reach a point where genetic augmentation and biosynthetic systems fused to our biology turn us into a species much different from the humanity we know today. In that future, Homo augmentus might truly come to represent a new step in human evolution, as human physiology would have undergone changes as radical as the shift from Homo erectus to Homo sapiens. Luckily this future is very far off.

My colleagues at Nokia Bell Labs and I believe that human augmentation will bring tremendous benefits to people and society, boosting our capacity for knowledge and leading to new gains in productivity. But we don’t believe the next phase of human existence will involve turning humans into techno-cyborgs. Instead, unobtrusive augmentations will enable us to be more connected to the people and things we care about most and ease the stresses caused by the banalities of our chaotic modern world. Ultimately Homo augmentus will allow us to be more human.

Michael Eggleston

About Michael Eggleston

Michael S. Eggleston received his B.S. degree in Electrical Engineering and Physics from Iowa State University and his Ph.D. in Electrical Engineering from UC Berkeley. In 2015, he joined Nokia Bell Labs in Murray Hill, NJ, where he currently leads the Data and Devices Group. An optical device physicist at heart, Michael’s research has included investigation into ultra-wideband wireless technologies, solar cells, environmental sensing, optical coherence tomography, low-power optical interconnects and devices, and integrated multi-wavelength lasers. His current research interests include battery-less sensing, non-invasive biochemical monitoring, and human-machine interfaces.