Today’s technology has advanced to the point where we can perform some incredible tasks with augmented reality, mixed reality, virtual reality, and even control digital objects with our minds. It’s no secret that these technologies are going to become an integral part of our daily lives in the near future. In the last episode of Enhancing Human Intelligence, I discussed the brain extensively, and how it is possible to get information from and to the brain without invasive or cochlear implants (i.e. you don’t need to drill into your skull to get useful information from the brain). I highly recommend reading the last episode if you haven’t, or reading it again to refresh your memory. 😊
Hello world! I’m Victor and I am an Artificial Intelligence and Machine Learning Developer (& the Director of AI @115Garage). In the last episode of Enhancing Human Intelligence, I said I would go into more detail about the setup of the proposed non-invasive device that aims to enhance human intelligence. But after some thought, I figured it would be much better to first discuss: why do we need this? How is it going to enhance our intelligence? What are the various components needed to make it work? And assuming we have a working device, how and why should you even use it?
What is Possible Today?
Today, we’ve gone really far in reading information from our brains, tapping into our motor neurons, and even transferring motor control from one human to another. Check out the short video below to see what I mean and get a sense of some of the advancements in the neuroscience revolution.
We’ve also been able to control digital devices with our minds. Invasive methods like Deep Brain Stimulation (DBS) and Electrocorticography (ECoG), a type of electrophysiological monitoring that uses electrodes placed directly on the exposed surface of the brain to record electrical activity from the cerebral cortex, make this possible. In layman’s terms, they involve opening up your skull to place electrodes that serve as receivers, which can then send useful signals.
However, we’ve also been able to read similar signals without needing to drill into the brain. In fact, the device can be packaged into a very fashionable wearable, with the help of Electromyography (EMG), which evaluates and records the electrical activity produced by skeletal muscles. Check out the video below to get more context on how this technology works.
Interfacing with devices through silent speech
My point is that we’ve been able to retrieve information from the brain with non-invasive approaches, without drilling into your skull. Just imagine we were able to retrieve a human’s inner thoughts (best to watch the video as well). We could perform a series of operations with this information: search the web, perform actions, control digital devices, get answers to any question on the World Wide Web. What exactly can we do with this information? What are the limitations? How do we communicate the resulting information back to the user?
Introducing Sage Knowledge Graph
Before describing the stages and components involved in giving humans access to unlimited intelligence, it is worth explaining one of the major components in the system: Sage. Before going into what “Sage” is all about, let’s talk about its name; Wikipedia describes a “sage” as someone who has attained wisdom. Sage in this context is an open-source Knowledge Graph, but it’s much more than just a Graph Database that deals with linked data alone. It comprises extensive functionality that might be described as a “digital brain”. Describing all the functionality, the components, and what makes it revolutionary would not fit into this mini section of its introduction. Therefore, I’ll soon write a publication that goes into detail and explains what Sage really is, and why it’s such a revolutionary invention.
However, I’ll give a bit of context on Sage’s components. It includes every functionality any other Knowledge Graph possesses, and then some. It could be described as an “Artificially Intelligent Knowledge Engine”. It is powered by state-of-the-art Natural Language systems plus some revolutionary ideas that make it stand out from every other Knowledge Graph. (A bit on its features in the upcoming section.)
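To give a sense of the linked-data idea underlying any knowledge graph, here is a toy triple store in Python. This is not Sage’s actual API (which isn’t public); the class and method names are purely illustrative of how facts can be stored as subject–predicate–object triples and queried by pattern.

```python
# Toy triple store illustrating the linked-data model behind knowledge graphs.
# This is NOT Sage's actual API; all names here are hypothetical.

class TripleStore:
    def __init__(self):
        self.triples = set()  # each fact is a (subject, predicate, object) tuple

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return all triples matching the given pattern (None = wildcard)."""
        return [
            t for t in self.triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
        ]

graph = TripleStore()
graph.add("Barack Obama", "nationality", "American")
graph.add("Barack Obama", "education", "Harvard Law School")
graph.add("Malia Ann Obama", "child_of", "Barack Obama")

# Everything the graph knows with "Barack Obama" as the subject:
facts = graph.query(subject="Barack Obama")
```

The point of the pattern-matching `query` is that the same store answers many shapes of question (all facts about an entity, all entities with a given property) without any schema changes.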
Stages & Components of Enhancing Human Intelligence
In the previous publication of Enhancing Human Intelligence, I gave a brief overview of how I intend to make this happen and presented various facts about the brain and the proposed device to create two-way communication between the brain and the digital world. (Please refer to that article if you haven’t already, or it might be helpful to go over it again to refresh your memory.) The various components that go into the development of a device that enhances human intelligence are:
- Retrieving Relevant Information from the Brain: Information is received from the brain when prompted (or when the device is invoked), in the form of “human thoughts”, and converted into a (predefined discrete or continuous) “representation”.
- Transforming Human Thoughts: Once the information is retrieved from the brain and converted into an “intermediate representation”, these “representations” are transformed into a “programmable data structure” that can serve as a valid data format for any program (or software). An AI system is trained to learn and understand patterns associated with the data. This might involve filtering out some noisy data considered “irrelevant”.
- The Sage Knowledge Graph: A lightweight embedded version of the Sage Knowledge Graph will be deployed on the device, and it reads the transformed data at this stage. Sage processes the data and converts it into a “Sage internal representation”, which includes performing state-of-the-art natural language tasks (like semantic parsing, entity recognition, and many more). It also builds a hierarchical structure in order to understand what the original thoughts are about (questions, actions, thoughts, discussions, answers, communications, etc.).
- More Processing with the Sage Engine: Sage then uses its revolutionary Query API to answer questions about the entity or entities of interest, or to perform related operations. For instance, if a question is asked like: “Who’s the last president of the United States?”, Sage understands that the entity of interest here is “Barack Obama”, and it returns all related data that exists in Sage’s graph database, which might include (height: 1.85m, nationality: American, education: Harvard Law School (1988–1991), parents: Ann Dunham and Barack Obama Sr., children: Malia Ann Obama and Sasha Obama, and many more) in a structured format.
- Time: The Sage Project is still at a very early stage of development. As with any revolutionary system, it takes a while to move from start to finish. The project started on the whiteboard on the 9th of January, 2018. The planning and architecture went on for about 21 months. The first prototype, written in Cython (a superset of Python that compiles to C), proved not to be as efficient and effective as I wanted it to be. So I went back to the drawing board and decided to write the entire system in a safe systems programming language: Rust. The first codebase commit was made on the 6th of October, 2019. The project is estimated to be near completion by 2025 (about 6 years).
- Security: There have been some security concerns when I talked to my friends about this. Couldn’t our brains be hacked? Couldn’t false information be sent? What if I don’t want it to access my thoughts? The answer to the last question: it only accesses your thoughts when you want it to, just like you say “Hey Google” or “Hey Siri” before they start listening to you. As for the other questions, I feel this post is getting too long, and I’ll definitely address some security concerns in the next episode of Enhancing Human Intelligence.
- Fashionable: I’ve always said I want the device to be indistinguishable from normal eyeglasses or other wearables (i.e. not looking too techie). This becomes a challenge, as many components of the system have to be crammed into the slim temples on both sides of the eyeglasses. I’m pretty positive, though, that this won’t be an issue in a few years, as the field of Nanotechnology is growing at a staggering rate! In addition, it doesn’t have to be eyeglasses, although I thought it’d be cool to have eyeglasses. One, because they are mounted directly on your head, close to the brain. Two, because the lenses can be used as a display, which could come in handy for complete automation without involving any mobile devices or computers.
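To make the flow through these stages concrete, here is a heavily simplified Python sketch: a decoded “thought” arrives as text, an entity of interest is recognized, and structured facts come back. Everything in it is hypothetical (the function names, the tiny lookup table standing in for entity linking, and the mini entity database standing in for the graph); it only illustrates the shape of the pipeline, not Sage’s real engine.

```python
# Hypothetical end-to-end sketch of the stages above. Real systems would use
# semantic parsing and entity linking, not a hard-coded lookup table.

# Stand-in for the graph database: structured facts keyed by entity.
ENTITIES = {
    "Barack Obama": {
        "height": "1.85m",
        "nationality": "American",
        "education": "Harvard Law School (1988-1991)",
        "children": ["Malia Ann Obama", "Sasha Obama"],
    },
}

# Stand-in for entity recognition: question pattern -> entity of interest.
QUESTION_TO_ENTITY = {
    "who's the last president of the united states?": "Barack Obama",
}

def transform_thought(raw_thought):
    """Stage 2 stand-in: normalize the decoded thought into a clean query."""
    return " ".join(raw_thought.lower().split())

def resolve_entity(query):
    """Stage 3 stand-in: recognize the entity of interest in the query."""
    return QUESTION_TO_ENTITY.get(query)

def answer(raw_thought):
    """Stage 4 stand-in: return structured facts about the resolved entity."""
    entity = resolve_entity(transform_thought(raw_thought))
    if entity is None:
        return {"error": "no entity of interest recognized"}
    return {"entity": entity, **ENTITIES[entity]}

result = answer("Who's the last president of the United States?")
```

The design point the sketch captures is the separation of concerns described above: decoding, normalization, entity resolution, and structured retrieval are independent stages, so any one of them can be swapped out (e.g. a better thought decoder) without touching the others.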
This revolutionary device not only gives you access to any information that has ever existed on the World Wide Web within milliseconds, but it also monitors your health and serves as your personal digital assistant. You don’t need weeks of initial training before you start using the base functionality, although the functionality gets better over time as you use the device more and more.
- Victor Afolabi (Artificial Intelligence and Machine Learning Engineer @lotlinx • Director of AI @115Garage)
DISCLAIMER: This content is exclusively licensed to Victor I. Afolabi. Intellectual Property (Media Ownership) is covered under the World Intellectual Property Organization (CDIP/21/INF/2, p.13, s.2.3).
PS: It is important to note that a number of details have been left out or not disclosed due to obvious reasons.