- SurvAIval Insights
AI in plain English, for normal people. PART 2 of 7
Social Media, Deep Learning and LLMs
Hello folks!
Last week, we saw an introduction to the alchemy of AI and algorithms. In this one, we will go through:
Social Media and AI
Deep Learning and Neural Networks
LLMs and Generative AI
And as always, at the bottom, you have a selection of the news of the week to stay updated and spark your curiosity, as well as cool tools and lessons that can enhance your life.
Social Media: A Basic Yet Effective AI System
That’s damn right!
How are these platforms that good at keeping you engaged?
It's not through human effort. There is no team of YouTube employees scrutinizing videos in real time to curate a personalized list for you; that would be humanly impossible, and it would go against the company's main goal: efficiency.
Instead, autonomous computers with sophisticated learning algorithms work behind the scenes, processing vast amounts of data.
Here, AI, or Machine Learning, enters the picture. These terms refer to computers performing tasks that typically require human intelligence.
Think of a machine playing chess over and over, learning about the game, its own mistakes, and how to fix them. With each game, it gets better, illustrating the essence of machine learning, or AI.
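The learn-by-repetition idea can be sketched in a few lines of Python. Everything here is invented for illustration: a toy "game" with three possible moves and hidden rewards. The program tries each move, keeps a record of how well each one pays off, and then repeats the one with the best record, exactly the "play, learn from results, improve" loop described above.

```python
# A toy "learn by playing" loop. The moves and their hidden rewards
# are made up for illustration; the learner only sees the reward it
# gets back after each play.
reward = [1, 3, 5]          # move 2 secretly pays best

total = [0.0, 0.0, 0.0]     # reward accumulated per move
plays = [0, 0, 0]           # times each move was played

for t in range(100):
    if t < 3:
        move = t            # first, try every move once
    else:
        avg = [total[i] / plays[i] for i in range(3)]
        move = avg.index(max(avg))  # then repeat the best move so far
    plays[move] += 1
    total[move] += reward[move]

print("preferred move:", plays.index(max(plays)))  # settles on move 2
```

After a handful of tries, the program has "learned" which move works and sticks with it. Real learning systems differ enormously in scale and sophistication, but the essence, improving behaviour from accumulated experience, is the same.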
A good question is: if we change the game the machine has been trained on, can the machine adapt its skills accordingly?
Let's talk about an issue I noticed while reading and watching videos online: there isn't a clear definition of what counts as AI and what doesn't. Even though it has been around for roughly 70 years, AI is still a young field, and we will need more time to reach a general consensus about it.
When are these machines considered intelligent?
Some experts believe that a machine playing chess repeatedly is an example of machine learning but not AI, because the computer can't take what it learns and apply it to different situations or games. Others think these systems aren't really intelligent if humans have to supply the right answers for the machine to learn.
Most people agree that machines are considered intelligent when they can learn from experience using a learning algorithm without human help.
Computer scientists define intelligence as the ability to reason, plan, and learn from experience, and, most importantly, to do these things in a general way, not only in a narrow domain or context (e.g., chess). With these abilities, machines could surpass human intelligence, and we would call it artificial general intelligence (AGI). How do we get there?
Deep Learning and Neural Networks
One of these learning algorithms teaches computers to process data at progressively higher levels. It is based on an artificial neural network in which multiple layers of processing extract features from the data.
Let’s ask a machine to find pictures of cats in a box full of random images. We will train tiny processing units distributed in layers (different groups of “neurons”) to recognise specific features of a cat.
The first layer identifies cat shapes. Subsequent layers focus on fur, then feline eyes, and continue to analyze more complex features in deeper layers.
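As a rough sketch of that layered idea, here is a tiny Python/NumPy example. The "image", the weights, and the layer sizes are all invented and untrained; the point is only the structure: each layer takes the previous layer's output and transforms it further, until the final layer produces a score for "cat" versus "not cat".

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    # one processing layer: weighted sum, then keep only
    # positive signals (the common "ReLU" rule)
    return np.maximum(0, x @ w)

image = rng.random(64)            # a fake 8x8 image flattened to 64 numbers

w1 = rng.normal(size=(64, 32))    # first layer: simple shapes
w2 = rng.normal(size=(32, 16))    # next layer: textures such as fur
w3 = rng.normal(size=(16, 2))     # final layer: "cat" vs "not cat" scores

h1 = layer(image, w1)             # output of layer 1 feeds layer 2, and so on
h2 = layer(h1, w2)
scores = h2 @ w3

print(scores.shape)               # two numbers: one score per class
```

Training would adjust the numbers inside w1, w2, and w3 until the scores become accurate; here they are random, so this network has the structure of deep learning but no knowledge yet.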
This method of machine learning, or AI, is called Deep Learning. It is built on Neural Networks (so called because they try to mimic how the human brain learns), and it is what creates the magic of our friend ChatGPT.
Large Language Models and Generative AI
Before delving into Large Language Models (LLMs), let's touch on Generative AI.
One thing to keep in mind is that ChatGPT is a special case of LLM, which is a special case of Generative AI, which is a special case of using machine learning generally, which is what most people mean by AI. Still here with me?
Traditional AI can analyze data and tell you what it sees, but Generative AI can use that same data to create something entirely new. It generates new content in different formats, such as audio, video, text, code, and images, which the computer may not have previously encountered. By the way, generative is the "G" in ChatGPT. Here we go.
In 2006, Google Translate appeared in our lives, and we didn't know it was an early form of Generative AI. English text comes in; text in the desired language comes out.
Siri (2011) is another example of Generative AI. We ask something, and Siri talks back.
And have you noticed that when we are writing an email, there's an automatic completion of words that saves us time? Again, this is Generative AI, and it is built on the same idea as LLMs!
Language modelling and Large Language Models are systems with this goal in mind: "I have some context; I will predict what comes next".
Training a system with lots of data enables it to identify patterns in that data, which it then uses to predict words when new information is introduced.
Imagine yourself on one of those TV programs where you try to guess what words go in the blanks of an incomplete sentence. Scientists train the Neural Network to fill in missing words, thereby teaching it the fundamental aspects of language such as meaning, grammar, and syntax.
LLMs operate similarly, but with a focus on predicting the next word in a sequence.
The two main components are:
Data: these models are fed vast amounts of organised data sets, or a corpus of information (large training data sets), which lets them learn word-sequence probabilities.
Architecture: thanks to the architecture of a neural network (a deep learning system), the model builds probabilities for sequences or combinations of words. With these, it predicts the most likely next word for a given sequence; adding that word produces a longer sequence. Word by word, it builds the most likely sentences, paragraphs, and so forth.
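That "predict the next word, then feed it back in" loop can be sketched in plain Python. This is a heavily simplified stand-in: it counts which word tends to follow which in a tiny made-up corpus, instead of using a neural network, but the word-by-word generation loop is the same spirit.

```python
from collections import Counter, defaultdict

# a tiny, made-up training corpus
corpus = "the cat sat on the mat the cat sat on the rug".split()

# count how often each word follows each other word
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # the most probable next word, according to the counts
    return following[word].most_common(1)[0][0]

# generate word by word, feeding each prediction back in
sentence = ["the"]
for _ in range(4):
    sentence.append(predict_next(sentence[-1]))

print(" ".join(sentence))  # -> the cat sat on the
```

Real LLMs replace the simple counting table with a deep neural network and train on billions of words, which is why they can continue any sentence rather than only the ones seen in training.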
Well, let’s digest this content so we can survive another week!
Next week, we will look at the origins of the current hype and the future of AI beyond language.
Thank you so much for your time.
My News Picks
OpenAI expands their safety team, reflecting a growing awareness of the potential risks of advanced AI technologies.
2024 is set to be the largest election year ever (UK, US, and India), and the world is unprepared for the influence of AI.
Balaji Srinivasan on Polytheistic AI, Human-AI Symbiosis, and Prospects for AI Control
Tesla and its robot Optimus - Gen 2
Tools
Midjourney introduces an alpha web version and Midjourney 6.0 to create amazing images.
Audio Note transforms spoken ideas into structured text, offering formats like journal entries and tweets.
Audiobox, released by Meta, creates sound effects from text.
Roast My Web is the fastest way to improve your website.
And that’s all. I hope these insights, news, and tools help you prepare for the future!
Have a really nice week.
Stay kind.
Rafa TV