This article was written by Artur Meyster. Artur is the CTO of Career Karma (YC W19), an online marketplace that matches career switchers with coding bootcamps. He is also the host of the Breaking Into Startups podcast, which features people with non-traditional backgrounds who broke into tech.

The term Artificial Intelligence is thrown around a lot these days, but far too often it is a misnomer, whether intentionally or out of ignorance. Because it has become something of a catch-all phrase, it is important to know the difference between technologies that sound alike but are not the same.

To truly understand the various technologies that mimic how the human brain processes information, we should first examine the different levels of intelligence in living creatures: sentience and sapience. In fact, these are two terms that often get used incorrectly when discussing the potential for artificial intelligence to attain human-level intelligence.

Sentience

This is the ability to think, feel, and perceive subjectively. Scientists believe that most (or all) animals have some level of sentience, and recent research has produced evidence that even some plants, fungi, and slime molds may have a degree of it as well.

Sapience

This is human-level intelligence, where a being has wisdom and a profound ability to perceive the world around them. As far as we understand, humans are the only creatures with this level of intelligence. However, there are ongoing debates about whether some animals may possess levels of intelligence that come very close to our own.

The Four Levels of Machine Awareness

Machines have not yet reached sentience, let alone sapience. However, it is widely believed that once artificial intelligence does reach sentience, sapience won’t be far behind. And once machines reach sapience, they could begin to surpass human intelligence exponentially, a point commonly referred to as the Singularity.

We can measure their advancement by observing the four levels of machine awareness:

  • Type I: Reactive Machines – These machines are simply reactive and cannot form memories or use past experiences to inform decisions. Compare this level of intelligence to an old calculator.
  • Type II: Limited Memory – This type of machine is a little more “intelligent” as it forms, holds, and recalls limited memories. Compare this level of intelligence to your smartphone or laptop.
  • Type III: Theory of Mind – These machines can form an understanding of the world and the agents around them. Compare this level of intelligence to an autonomous vehicle: it must recognize and interact with its surroundings, such as avoiding objects or “deciding” to brake or turn. These machines are considered to be on the cusp of sentience.
  • Type IV: Self-Aware – This is where machines become sapient. At this time, we do not yet fully understand exactly how and where human consciousness is formed. Therefore, it is impossible for us to know when machines will become self-aware.

Now that we better understand some of the levels of intelligence of both humans and machines, we can better identify the stage each technology has reached. Modern society has become comfortable with Types I and II. Types III and IV, however, are approaching at a speed that is almost impossible to predict. Even though many experts in the field try to put a timeframe on when the singularity may arrive, outside influences such as brain-computer interfaces (discussed below) could accelerate the pace.

Glossary of Terms for Intelligent Machines

Let’s get into the nitty-gritty of the various terms used to describe intelligent machines. As the industry progresses, new terms will undoubtedly emerge, but these are the most commonly used at the time of writing:

  • Neural Net / Neural Network – A group of interconnected nodes, loosely resembling the connections between neurons in the brain. Each node combines the signals it receives and passes a result onward, and together the nodes learn to perform a task.
  • Machine Learning – Using statistics to “teach” a program how to “learn” from data. The main purpose is to allow the program to arrive at its own conclusions without the solutions having been explicitly programmed into the system (a minimal sketch follows this list).
  • Deep Learning / Deep Structured Learning / Deep Neural Learning / Deep Neural Network / Hierarchical Learning – These terms are used interchangeably to describe machine learning with many-layered neural networks, which can find patterns in large, unstructured datasets drawn from numerous sources. This is in contrast to standard machine learning, where task-specific algorithms are often trained on specific, structured datasets.
  • Intelligent Retrieval – The process of pulling information from a database in a structured manner. The most common examples of this technology are the simpler chatbots in use today.
  • Electronic Brain / Artificial Brain / Artificial Mind – The hardware, software, and firmware components that make up an artificial intelligence network.
  • Cybernetics – Refers to the study of the structures relating to the automatic control systems and communication processes in both machines and living organisms.
  • Strong AI – Another term for artificial intelligence that reaches human-level intelligence, i.e., the singularity.
  • Autonetics – Refers to the development and use of machines for the automated control and guidance of devices. Autonomous vehicles use autonetics to interact with the world.
  • Natural Language Processing (NLP) / Natural Language Generation (NLG) / Natural Language Learning (NLL) / Natural Language Interpretation (NLI) / Natural Language Understanding (NLU) – These terms describe how machines “learn” from and interact with humans using human languages rather than algorithmic commands. The more sophisticated chatbots employ these tools, whereas less sophisticated chatbots rely on intelligent retrieval, as mentioned above.
  • Affective Computing / Artificial Emotional Intelligence (Emotion AI) – Refers to teaching machines to understand, learn from, and interact with human emotions.
  • Expert Systems / Cognitive Expert Advisor / Cognitive Computing – Machines drawing on specialized expert knowledge to perform difficult tasks such as diagnosing a disease or trading stocks. Though these systems are often described as artificial intelligence, they mostly operate on “if-then” rules.
  • Virtual Personal Assistant / Digital Assistant – These terms sometimes describe a human who provides assistance remotely; however, they are also used for programs such as Alexa, Google Home, and Siri.
  • Synthetic Intelligence / Artificial General Intelligence (AGI) / General Purpose Machine Intelligence – Used interchangeably to describe AI, though these terms place a heavy emphasis on machines possessing “true” or “pure” intelligence, as with the singularity.
  • Weak AI / Narrow AI – This is the most common type of AI we see in use today. The program focuses on specific areas and cannot demonstrate broad knowledge in other areas.
  • Super Intelligence – Refers to AI that possesses intelligence far superior to humans. It is this form of intelligence that concerns many experts about the potential for AI to enslave humanity.
  • Technological Singularity – Not exactly the same as the AI singularity. This refers more to the point where AI and other technologies accelerate so intensely that they completely change the way humans live. It is sometimes discussed alongside the Kardashev Scale, which ranks civilizations as Type I, II, or III by the energy they can harness; humanity has still not reached Type I, but we are close.
  • Artificial / Machine / Synthetic Consciousness – Although these terms get used interchangeably with several others above, intelligence and consciousness are not the same thing. While intelligence comes in measurable degrees, consciousness, in machines and humans alike, is something we are still trying to figure out.
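
To make the “neural network” and “machine learning” entries above concrete, here is a minimal sketch in Python of a single artificial node learning from data. The toy dataset, variable names, and learning rate are all hypothetical, chosen purely for illustration; it is a sketch of the idea, not a production implementation.

```python
import math
import random

random.seed(0)  # reproducible illustration

# Hypothetical toy dataset: (hours studied, hours slept) -> passed exam (1) or not (0).
data = [((8.0, 7.0), 1), ((1.0, 4.0), 0), ((6.0, 8.0), 1),
        ((2.0, 3.0), 0), ((7.0, 5.0), 1), ((3.0, 2.0), 0)]

def predict(weights, bias, x):
    """One 'node': weigh the inputs, add a bias, squash to a 0-1 score."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid activation

# "Learning": nudge the weights after each example to shrink the prediction error.
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias, lr = 0.0, 0.1
for epoch in range(1000):
    for x, label in data:
        error = predict(weights, bias, x) - label
        weights = [w - lr * error * xi for w, xi in zip(weights, x)]
        bias -= lr * error

# The pass/fail rule was never explicitly programmed; the node inferred it from data.
print(predict(weights, bias, (7.5, 6.0)))  # close to 1.0 (predicts a pass)
```

Deep learning, as defined above, is essentially this same idea repeated at scale: many layers of such nodes adjusting their weights together.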

Terms That Should Not Be Confused with AI Terminology

Some of these terms may be confused with AI terminology, so it is important to understand the differences. These terms deal with humans and how we merge or interact with intelligent machines, not the machines themselves.

  • Brain-Machine Interface (BMI) / Brain-Computer Interface (BCI) / Mind-Machine Interface (MMI) / Direct-Neural Interface (DNI) / Neural-Control Interface (NCI) – These terms refer to the various ways machines interact with a living brain, sending information to it or receiving information from it directly. Some are invasive, meaning they are implanted in direct contact with the brain, such as neural lace (discussed below) or larger devices like Neuralink’s implant. Others are external, such as EEG machines, which use a cap fitted with electrodes that read signals from the scalp.
  • Neural Lace – An ultra-thin mesh that can be implanted through the skull to monitor brain functions. The mesh can both send information to and receive information from an external brain-computer interface.
  • Knowledge Engineering – The discipline of building the frameworks for knowledge-based systems, a task performed by human engineers.
  • Computational Thinking – Describes how computer scientists structure their thinking to mirror the way a computer works through a problem: formulating it precisely and then working out a solution. It uses four processes: decomposition, pattern recognition, pattern generalization, and algorithm design (see the sketch after this list).
  • Augmented Intelligence – Another term that is sometimes misread as pertaining solely to machine intelligence. Instead, it means using technology to boost human (or animal) intelligence.
  • Wetware – Similar to hardware, software, and firmware, except that it is made of organic matter, like our brains. It can be natural or synthetic organic matter, as long as it does not refer to a “brain” made of plastic and metal.
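
As a small illustration of computational thinking’s four processes, here is a Python sketch that finds the most frequent words in a text. The problem and the function names are my own, chosen only to make each step visible; the comments map the code to the four processes.

```python
# Problem: which words appear most often in a text?
# Decomposition: break the problem into steps (normalize, split, count, rank).
# Pattern recognition: "count occurrences of each item" recurs in many problems.
# Pattern generalization: write the counting step so it works for any items.
# Algorithm design: combine the steps into a precise, repeatable procedure.

def count_items(items):
    """Generalized counting step: works for words, letters, anything hashable."""
    counts = {}
    for item in items:
        counts[item] = counts.get(item, 0) + 1
    return counts

def top_words(text, n=3):
    words = text.lower().split()        # normalize, then decompose into words
    counts = count_items(words)         # reuse the generalized counting pattern
    return sorted(counts, key=counts.get, reverse=True)[:n]  # rank the results

print(top_words("the cat sat on the mat and the cat slept"))  # ['the', 'cat', ...]
```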

Where Do We Go From Here?

Although it may be a little overwhelming, the list above is really only a small fraction of the technical vocabulary that business and industry leaders should become familiar with to make wise decisions when planning a digital transformation strategy. This knowledge will also empower you to avoid being sucked into hype built on buzzwords slapped onto substandard technologies.

It sometimes feels nearly impossible to stay up-to-date with the latest innovations, as new advancements are announced constantly. But there is a way to stay ahead of the curve and avoid being lost in the ocean of information out there: stay informed about new technologies so you can mobilize your team, plan for the disruption they may cause, and incorporate them into your strategy before your competitors do.

To give you some clarity on where technology is headed, let’s explore some fascinating examples of what’s around the corner, how they can disrupt industries, and how you can start planning now to take full advantage.

Quantum Computing

If you’re only vaguely familiar with quantum computing, it is one technology you should definitely keep a close eye on. For artificial intelligence to become as strong as many experts predict, and to allow for the connectivity needed to deliver on its promises, quantum computing will need to be developed to a level where it can be applied commercially.

And once it reaches that level, you had better be prepared to implement it as soon as possible. It is neither hyperbole nor hype to say that if you’re not planning today, your business will be left in the dust. Quantum computing could turn every industry upside-down: it will force us to rethink and possibly reinvent cybersecurity strategies, make processing big data much simpler, and enable better connections across the Internet of Things.

Digital DNA for Data Storage

Researchers have discovered that we can encode digital information onto synthetic DNA strands, storing vast amounts of data in a very, very small space. Why is this a significant development? The answer is quite simple: our current methods of data storage cannot handle the amount of data produced every day.

The world is projected to soon generate 463 exabytes of data per day. How much is one exabyte? It is 1,000,000,000,000,000,000 bytes (1 quintillion), or 1,000 petabytes: enough information to fill about 2,000 cabinets in a 4-story data center the size of a city block. Now picture 463 of those data centers needing to be built every day to house the data we produce with current storage methods. It is obviously impractical.

So, scientists have devised a way to store 215 petabytes (215 million gigabytes) in a single gram of synthetic DNA. Another significant advantage of this storage method is that it won’t degrade like our current media, nor will the format ever become obsolete. And it’s not the only innovative way that science and technology are merging to rise to the data challenge: other approaches include digital metabolomes, manipulating single atoms, quartz crystal coins, and many more options that could revolutionize the way we store and retrieve big data.
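
To put those figures side by side, here is a quick back-of-the-envelope calculation using only the numbers quoted in this article (real-world densities, redundancy, and encoding overhead would change the result):

```python
EXABYTE = 10**18   # bytes
PETABYTE = 10**15  # bytes

daily_data = 463 * EXABYTE      # projected data generated per day (figure above)
dna_density = 215 * PETABYTE    # bytes per gram of synthetic DNA (figure above)

grams_per_day = daily_data / dna_density
print(f"{grams_per_day:,.0f} g (~{grams_per_day / 1000:.1f} kg) of DNA per day")
# -> about 2,153 g: roughly 2.2 kg of DNA could hold an entire day's data
```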

While encoding massive amounts of data onto synthetic DNA may seem impractical at this point, companies are already working to automate the process and drive costs down. Microsoft has partnered with the University of Washington to create the first automated DNA storage system. It may be only a matter of time until your machine learning algorithms are devouring massive datasets stored on synthetic DNA.

Accelerated Education Programs

The days of requiring a traditional education to prove one’s expertise are beginning to dwindle as pioneers of innovative learning technology find more ways to accelerate the process. Through the technology of today and tomorrow, we will soon see education transformed in ways we once thought were only science fiction.

In the past few years, accelerated technology training programs known as coding bootcamps have turned the educational world upside-down. By allowing students to spend less time on general education and more on gaining practical skills, they get people to work innovating in various areas of technology faster, which has helped accelerate the pace of technological innovation.

Higher education institutions have found they have no choice but to adopt these accelerated learning models to keep pace with the growth of the coding bootcamp industry. Not only are these bootcamps suitable for those who are new to technology fields, but they are also helping tech professionals further their education and helping CEOs gain the tech skills they need to manage their digital transformation strategies effectively.

It will be interesting to see how accelerated learning programs advance once artificial intelligence, quantum computing, and technologies like neural lace and brain-machine interfaces become more mainstream. We could one day see information uploaded to and downloaded from the cloud directly into the brain at lightning speed. This may sound like science fiction, but governments around the world are working hard to make it a reality.

Conclusion

The next decade holds a lot of promise for innovation in ways we can’t even imagine yet. It is an exciting, yet slightly perplexing, time to be a business owner. Spotting hype and wading through it to find the gems of wisdom you need to stay ahead of the competition can be difficult.

However, with each new technological advancement comes a new opportunity to rise above the rest. Embrace the new paradigm: not only will there be more than one path to take, but the paths will be more varied, each with its own advantages and liabilities.

Each approach offers new solutions, new ways to pursue your goals, and new opportunities for greater success. Of course, there is a fine line between success and failure, and timing matters. The sooner you start, the sooner you’ll be able to say that you were there when it happened.