AI In Plain English, For Normal People. PART 3 Of 7

The current AI buzz, and The Future of AI: Beyond Language

Happy Holidays to all of you!

We continue with the best guide to understanding AI, so you don't fall behind in the future!

Last week, we talked about AI and social media, Deep Learning, and LLMs. In this one, we will go through:

  • The current AI buzz

  • The Future of AI: Beyond Language

And as always, at the bottom, you have a selection of the news of the week to stay updated and spark your curiosity, as well as cool tools and lessons that can enhance your life.

That’s damn right!

(approx. reading time = 5 minutes)

Where's the current buzz about AI coming from?

AI has been around for quite some time, but where's the current buzz coming from?

Over the last decade, interest in AI and deep learning has increased significantly, driven by major advances in computer science and the availability of large data sets. Now these systems can find patterns (of patterns, of patterns, of patterns) in words, delving deep into the intricacies of language.

This 'alchemy' enables models to generate the texts we see today. 

Besides, the more parameters a neural network has, the more different tasks the model can perform. These models are also trained through self-supervised learning, a real breakthrough in autonomous deep learning: the system learns from raw text on its own, with no human labelling.
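Here is self-supervised learning in miniature, as a sketch: raw text becomes its own teacher. From one plain sentence (the sentence and the pair format are invented for illustration) we build (context, next word) training pairs, with no human labelling needed.

```python
# Self-supervised learning in miniature: the text labels itself.
# Every position in the sentence gives us one free training example:
# "given these words, the correct next word was this one."
text = "the cat sat on the mat"
tokens = text.split()

# Build (context, next word) pairs from the raw sentence.
pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

for context, target in pairs:
    print(" ".join(context), "->", target)
```

Run it and you get five ready-made training examples out of one six-word sentence, which is exactly why these models can feast on the entire internet without anyone labelling it first.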

Transformers are another reason why these models are so powerful. Not the ones you are imagining, though, but those underlying the "T" in ChatGPT. The Transformer Neural Network Architecture.

Think about it as different blocks of neural networks that work together to solve the task: predicting the next word.
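That task can be sketched in a toy version: score every candidate continuation and return the most likely one. The candidate words and their probabilities below are invented for illustration; a real model scores tens of thousands of possible words at every step.

```python
# A toy version of "predict the next word": look at the candidates,
# pick the one with the highest probability.
def predict_next_word(prompt):
    # Invented probabilities for the prompt "I saw a bear in the ..."
    candidates = {"forest": 0.62, "zoo": 0.21, "kitchen": 0.04, "moon": 0.01}
    return max(candidates, key=candidates.get)

print(predict_next_word("I saw a bear in the"))  # -> forest
```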

To understand why transformers are a leap forward in AI, let's see how words change their meaning depending on their position in a phrase.

What really matters is the role a word plays in its specific context.

"I saw a bear in the forest!"

"I can't bear the image of it!"

Before 2017, AI would work in a one-directional way. We ask for the word "hello" in Catalan (input), and the system accesses a closed data set and gives us the output: "hola."
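In code, that pre-2017 picture is essentially a fixed lookup (the tiny dictionary here is invented for illustration): one input goes in, one stored output comes out, and no context is consulted.

```python
# The one-directional, pre-2017 picture: a closed data set as a lookup.
# Whatever the surrounding sentence says, "hello" always maps to "hola".
translations = {"hello": "hola", "goodbye": "adeu"}

print(translations["hello"])  # -> hola
```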

Instead of processing only one word, the transformer architecture will process a sequence of them.

When predicting the next word, current LLMs take into account the context of the whole phrase and its different possibilities, comparing the words of the sentence with each other instead of looking each word up one-directionally in a closed data set.
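Here is a minimal, pure-Python sketch of that idea (a stripped-down version of the attention mechanism at the heart of the Transformer): every word is compared with every other word in the sentence, and each word's new representation becomes a weighted mix of all of them. The tiny two-number "embeddings" are invented for illustration; real models use vectors with thousands of numbers.

```python
import math

# Each word starts with a made-up 2-number vector (a toy "embedding").
words = ["I", "can't", "bear", "it"]
E = [[0.1, 0.9], [0.8, 0.1], [0.2, 0.9], [0.5, 0.2]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(vectors):
    out = []
    for q in vectors:
        # Compare this word with every word in the sentence.
        scores = [dot(q, k) for k in vectors]
        # Softmax: turn the scores into weights that sum to 1.
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # The word's new meaning: a weighted mix of all the words.
        mixed = [sum(w * v[d] for w, v in zip(weights, vectors))
                 for d in range(len(q))]
        out.append(mixed)
    return out

context_aware = self_attention(E)
print(len(context_aware))  # -> 4 (one context-aware vector per word)
```

This is why "bear" in "I can't bear it" ends up represented differently than "bear" in "I saw a bear in the forest": its new vector depends on the words around it, not on a fixed entry in a closed data set.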

Think about it like this: We have been operating along the Y-axis (a single dimension), and now we have incorporated the horizontal X-axis into the mix.

The Transformer Neural Network Architecture signifies a monumental jump in AI's future progress and innovation. 

The Future of AI: Beyond Language

As Daniel Kahneman outlines in "Thinking, Fast and Slow," our brain operates in two different modes, or systems:

System 1: The quick, instinctive, automatic part that knows 2+2 equals 4 without calculation—the information is "cached" within.

System 2: The one that engages with rational, slower, decision-making intelligence to work out a problem in your head and get an answer, like calculating 13 x 27.

LLMs are like System 1, the one that knows what 2+2 is, or predicts the next word. It seems like they can reason and learn from experience, but that doesn't mean they have a complete understanding of what they are doing, or wisdom.

LLMs work like stream-of-consciousness poetry, where you write quickly and freely without stopping to think about what you're writing or its meaning. It's an automatic flow of consciousness, laid out word by word, like assembling a train track.

I'm not saying this isn't incredible, because it mirrors the quick-thinking part of our brain, but that is only one part of human intelligence. Right now, machines are learning mostly from text as input.

Most of what we know about the world is not reflected in language. We didn't start using language properly until we were 5 years old. Before that, we intuitively learned about physics. We looked, listened, touched, and smelled in the real world and slowly understood it in an intuitive manner, like animals do.

We're developing systems enabling machines to learn as humans do, employing diverse types of intelligence and understanding the physical world (vision, sound, smell, touch, etc.).

The next stop in this journey is machine learning via video input.

The end goal of this journey is to teach machines to make decisions and predict their potential consequences, giving them the ability to plan toward an objective.

AI still doesn't have the capacity to think as humans do, in a multidimensional or general way, with conscious intelligence. When machines reach this level of intelligence, we'll witness the emergence of Artificial General Intelligence (AGI), capable of generalizing knowledge across various domains.

THAT is going to be a moment to remember, like when Jesus Christ came to earth.

Next week, we will dive deeper into the mystery surrounding our friends, the chatbots, or LLMs.

Thanks for your time

My News picks

Cool Tools

  • Inbox Zero helps you clean up your inbox in minutes.

  • Jellypod converts email subscriptions into personalized daily podcasts.

Educational

  • 100DaysofAI helps you learn AI skills in 100 days with daily bite-sized lessons.

And that’s all. I hope these insights, news, and tools help you prepare for the future!

Have a really nice week.

Stay kind.

Rafa TV