How can technology revolutionize inclusion and safety in the workplace? As part of our series on generative artificial intelligence (gen AI), we sat down with Olivier Oullier from our partner, Inclusive Brains, to explore how Prometheus BCI, the adaptive human-machine interface (HMI) developed as part of our partnership, does just that.
The brainchild of neuroscientist-turned-AI entrepreneur Professor Oullier and cybersecurity and AI expert Paul Barbaste, Inclusive Brains is a French start-up with a vision for a more equal future. Oullier and Barbaste have developed Prometheus BCI, an innovative HMI that harnesses multimodal gen AI first to enhance autonomy for people with disabilities, and then to improve physical and psychological safety in the workplace for everybody.
By combining Allianz Trade’s global reach with Inclusive Brains’ expertise, we aim to make this technology accessible to a wide audience. The goal is to set new standards for inclusivity and safety in both the workplace and society, thanks to innovations that can benefit everybody – without discrimination.
Can you introduce yourself and tell us more about the path that led to Inclusive Brains?
I’m a neuroscientist with over 25 years’ experience in behavioral and brain sciences applied to developing and commercializing brain-computer interfaces (BCI) that optimize how humans use and collaborate with machines. My focus is on how people interact and make decisions, whether with humans, machines or digital environments. I’ve also worked extensively on the societal impact of neurotechnologies and AI, in terms of both ethics and regulation.
My journey took me to the World Economic Forum, where I led the global health and healthcare initiatives, and then to the French Prime Minister’s office, where I led the neuroscience and public policy program. This contributed to France becoming the first country to explore neurotechnology regulation by adding sections on functional neuroimaging to its bioethics laws. Later, I became the President of EMOTIV, the California-based global leader in portable neurotechnologies. During my time there, we launched the first brain-sensing earbuds, a global platform to collect brain data at scale, and new research-grade portable brain sensors (both of which are used to train our Prometheus BCI system).
Today, at Inclusive Brains, we’re combining gen AI and brain technologies to empower people with physical and/or cognitive impairments to enter or re-enter the workforce. We also help organizations promote the physical and mental health of their employees with this assistive tech.
Why did you start Inclusive Brains?
Through my work as a neuroscientist and in product development, I realized over time that a lot of assistive technologies are standardized, or one-size-fits-all. They usually offer defined functionalities, and it’s up to the people who use them to learn the user interface and how to activate and control the devices. However, if a user is not able to perform the movement or sound needed to activate a function, then some devices become useless.
I believe that assistive technologies, and machines in general for that matter, must adapt to the user, rather than the other way around. This became one of my “guiding ideas.”
In 2017, a pivotal moment helped lead to the creation of Inclusive Brains. My friend Rodrigo Hübner Mendes is a leader in inclusion for people with disabilities – he has helped more than 1.5 million people to date – and himself has quadriplegia. Thanks to EMOTIV technology, he drove a real Formula One (F1) race car on a real racetrack – using only the power of his mind!
It was an incredible feat that garnered lots of attention worldwide, including from my future Inclusive Brains co-founder Paul Barbaste. He reached out to me after seeing a video of Rodrigo and me challenging none other than F1 legend Lewis Hamilton to a mind-controlled car race. From our first conversation, Paul and I began to envision how we would turn one-off research projects like the mind-controlled F1 car into practical, everyday solutions. Our aim was to help people with severe sensorimotor impairments regain some autonomy.
That’s our goal with Inclusive Brains – to create adaptive human-machine interfaces for daily use. We train our gen AI models with multimodal neurophysiological data – brain signals, facial expressions, eye tracking, heartbeats, voice intonation and movement – to make sure interfaces can adapt to each user’s unique abilities and skills in real time.
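To make the idea of a per-user adaptive interface concrete, here is a minimal, purely illustrative sketch – not Inclusive Brains’ actual pipeline; all class names, modalities and the nearest-class-mean rule are assumptions for the example. It shows how features from several modalities could be fused into one vector, with a short per-user calibration deciding how that vector maps to commands:

```python
# Hypothetical sketch of multimodal fusion: each modality yields a
# feature vector; the interface concatenates them, and a per-user
# calibration decides which signal patterns drive which commands.
import numpy as np

def fuse_modalities(streams: dict) -> np.ndarray:
    """Concatenate per-modality feature vectors into one input vector."""
    return np.concatenate([streams[name] for name in sorted(streams)])

class AdaptiveInterface:
    """Maps fused features to commands using per-user calibration."""
    def __init__(self, commands):
        self.commands = commands
        self.weights = None  # learned per user during calibration

    def calibrate(self, examples):
        # examples: list of (fused_features, command_index) pairs.
        # A real system would train a model; here we simply average
        # the calibration samples per command (class means).
        dim = len(examples[0][0])
        self.weights = np.zeros((len(self.commands), dim))
        counts = np.zeros(len(self.commands))
        for feats, idx in examples:
            self.weights[idx] += feats
            counts[idx] += 1
        self.weights /= np.maximum(counts[:, None], 1)

    def predict(self, fused):
        # Nearest class mean: pick the command whose calibration
        # profile is closest to the current fused signal.
        dists = np.linalg.norm(self.weights - fused, axis=1)
        return self.commands[int(np.argmin(dists))]
```

Because the mapping is learned per user, the same interface can lean on brain signals for one person and on facial-expression features for another – the adaptation lives in the calibration, not in the hardware.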
Can you share some real-life applications of this technology?
The core idea of the technology behind Prometheus BCI is adaptability, a trait that defines human intelligence and is essential for moving through the world with autonomy. Our product is both hardware-agnostic and highly versatile. This means it can be added to existing connected off-the-shelf devices such as wheelchairs, phones or workstations to enable those with very different needs to navigate the world.
We are very proud that Prometheus BCI was used to achieve two world premieres earlier this year. One took place at the United Nations’ AI for Good Summit, where Prometheus BCI was the first ever multimodal noninvasive HMI used to write and send a tweet. The recipient was none other than sitting French President Emmanuel Macron, who replied and congratulated our team on our achievements. The other was a feat by Nathalie Labrégère and Pierre Landerretche, broadcast globally as France was about to host the biggest sporting event of the year.
Nathalie has severe physical and cognitive impairments. She can execute only a limited set of movements, and she cannot usually perform them on demand or in a controlled fashion. Traditional brain-computer interfaces used to mind-control a device require mastering one’s stress and attention levels, which wasn’t an option for her. But we adapted Prometheus BCI to detect two facial expressions she could control: blowing a kiss and smiling with an open mouth. In combination with her brainwaves, these were then mapped to control the actions of an exoskeleton, enabling her to interact with her environment. She was able to extend her arm and reach out to a torch thanks to Prometheus BCI, something she is sadly unable to do without technological assistance.
In contrast, Pierre, who has full cognitive ability but sensorimotor impairments, could operate the arm exoskeleton using more “traditional” mental commands that are motionless, silent and touchless. This is what is often referred to as “mind controlling” a device.
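The two stories above share one structure: the same device accepts the same actions, but each user reaches them through different triggers. A hypothetical sketch of that mapping layer (the trigger names, action names and API here are invented for illustration, not the real Prometheus BCI interface) might look like:

```python
# Hypothetical sketch: one device API, different per-user triggers
# (facial expressions for one user, mental commands for another).
# All trigger and action names are illustrative only.

EXO_ACTIONS = {"extend_arm", "retract_arm"}

class TriggerMapper:
    """Per-user mapping from detected triggers to exoskeleton actions."""
    def __init__(self, mapping):
        unknown = set(mapping.values()) - EXO_ACTIONS
        if unknown:
            raise ValueError(f"unsupported actions: {unknown}")
        self.mapping = mapping

    def on_trigger(self, trigger):
        # Return the action to send to the device, or None when the
        # detected event is not mapped for this user.
        return self.mapping.get(trigger)

# One user drives the arm with the two expressions she can control...
user_a = TriggerMapper({"blow_kiss": "extend_arm",
                        "open_smile": "retract_arm"})
# ...another with motionless, silent mental commands.
user_b = TriggerMapper({"imagine_push": "extend_arm",
                        "imagine_pull": "retract_arm"})
```

The device-side code never changes; only the thin per-user mapping does – which is what makes the interface adaptive rather than one-size-fits-all.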
Solutions like Prometheus BCI, which are tailored to each user’s physicality and cognition, are therefore not one-size-fits-all. Adaptive technologies are essential to closing the equality gap for people with disabilities.
How does gen AI, and multimodal learning, boost workplace inclusion?
To give a concrete example, someone diagnosed with limited attention capacity might struggle with prolonged professional tasks. This does not mean that this person is unfit for the job. Maybe they just need assistance from an adaptive technology.
Since our solutions can monitor and adapt to variations in focus, cognitive load, stress and fatigue in real time, they help mitigate some performance limitations due to neurodivergence and boost productivity while preserving one’s psychological safety and physical health. This is literally changing the game for both workplace inclusiveness and safety. Let’s also not forget that this tech could help mitigate the stigma against people wrongly perceived as “different” by their colleagues.
True multimodality of gen AI was key for us to achieve these goals. For a lot of people, gen AI equals text generation. But when it comes to human-human interaction or human-machine interaction, there is so much more than words – think gestures, facial expressions, intonation, bodily signals such as changes in movements because of fatigue or stress! The list goes on.
That’s why at Inclusive Brains we train our gen AI models with various types of data – or modalities – to mimic the human brain’s functioning and learning processes as closely as possible. We are working on a Large Cognitive Model, but we are also developing specialized AI agents, each excelling at a specific task. For example, we have one that focuses on facial expressions and another for brain data.
Although each agent is specialized, what’s interesting is that we train them together. This means they’re highly adaptable, working efficiently both solo and in combination with other agents – either fellow specialized agents or supervisors that optimize the agents’ teamwork.
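One simple way to picture this solo-or-combined behavior (a toy sketch under assumed names – not how Inclusive Brains actually implements its agents) is a set of specialists that each score commands from their own modality, plus a supervisor that merges their confidence scores:

```python
# Hypothetical sketch of combining specialized agents: each agent scores
# candidate commands from its own modality; a supervisor merges the
# scores. The agent logic here is deliberately trivial.

class Agent:
    """A specialist returning {command: confidence} from one modality."""
    def __init__(self, name, score_fn):
        self.name = name
        self.score_fn = score_fn

    def propose(self, observation):
        return self.score_fn(observation)

class Supervisor:
    """Combines agents' proposals by summing their confidences."""
    def __init__(self, agents):
        self.agents = agents

    def decide(self, observations):
        totals = {}
        for agent in self.agents:
            for cmd, conf in agent.propose(observations[agent.name]).items():
                totals[cmd] = totals.get(cmd, 0.0) + conf
        # Pick the command with the highest combined confidence.
        return max(totals, key=totals.get)
```

Each agent still works alone – calling `propose` directly gives a usable per-modality decision – which mirrors the point that the specialists operate both solo and under a supervisor.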
The benefits are not only in the efficiency of our solutions per se. Multi-agent solutions like ours are truly frugal-by-design. They are a lot less demanding when it comes to computational power and energy. This means they can run locally, without the need to offload computation to the cloud, and therefore at a lower operational cost and with a significantly smaller carbon footprint.
True multimodal training makes gen AI more versatile and capable of deeper, context-aware understanding – essential for handling complex real-world situations and for giving machines the capacity to adapt. By adapting to users’ uniqueness and needs, AI can provide tailored solutions that accommodate individual challenges, fostering equal opportunities for success.
What does the future of Inclusive Brains look like?
Going forward, we’re focused on expanding our impact to benefit everyone, not only people with disabilities, with zero discrimination – that’s the essence of truly inclusive tech. We started by creating tools for people with disabilities to succeed in the workplace, but our vision has always been broader. We’re inspired by innovations like the remote control, which was originally designed to help people with disabilities but became a commercial success adopted by billions, revolutionizing how we all interact with technology.
Our ultimate goal is to achieve true digital inclusion. We recognize that as individuals – with various abilities – our needs evolve over time. The technology we create must adapt to these changes, offering inclusive, accessible tools for all.