Table of Contents
- What is Artificial Intelligence?
- History of Artificial Intelligence
- What are the types of Artificial Intelligence?
- Deep Learning vs. Machine Learning
- The advancement of generative models
- Where can we find artificial intelligence?
- Will artificial intelligences steal your job?
- The most famous artificial intelligences
Artificial Intelligence is a term that has gained more and more prominence in recent years, and with good reason. It is a technology that is revolutionizing how we interact with the world around us.
But after all, what is artificial intelligence? In this special article, we will explore the concept, its applications, challenges and perspectives for the future. Follow along and find out how AI is transforming the world we live in.
Watch the video on the Showmetech Channel:
What is Artificial Intelligence?
Artificial Intelligence, also known by the acronym AI (or IA, in Portuguese), represents one of the most fascinating fields in computer science today.
This technology allows computers or machines to mimic human intelligence.
AIs are based on models and algorithms created by scientists, designed to work like the human brain. They are able to identify information, make connections between pieces of data and, almost always, predict the most appropriate answer for each case.
In recent years, there have been several concepts and definitions for artificial intelligence, but John McCarthy, the famous computer scientist, defined AI in an article as "the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."
According to the scientist, although we consider human intelligence a benchmark, we should not restrict artificial intelligence to imitating our way of thinking.
The study of AI is not new (it began in the 1950s), but it has only now reached its "revolutionary" status, thanks to three recent factors:
The first is the development of computers or data centers with gigantic processing power, enough to handle complex artificial intelligence models.
The second factor is access to large amounts of data, provided by the internet itself. Although this data is "raw", that is, not necessarily organized and classified, it is the basis on which AIs learn to classify objects correctly and give correct answers to what is being asked.
And the third concerns data models, which are efficient and accurate representations of the information we want to analyze or use. They are built to help AIs better understand what they are being told.
With that, we get to what we see today: AIs that answer questions on any subject, create work presentations, completely new images and even songs with the voices of real singers.
For example, if we ask ChatGPT, an AI system that can understand and answer questions as if it were a real person, what artificial intelligence is, we can get the following answer:
Artificial intelligence (AI) refers to a field of computer science that focuses on developing systems and machines capable of performing tasks that would normally require human intelligence. AI aims to create programs and algorithms that can perceive, reason, learn and make decisions autonomously. ChatGPT
These programs or algorithms are also present in the electronics that we use, such as in cars that drive themselves, robot vacuum cleaners and, of course, in the most diverse functions that your smartphone offers you.
But, for us to understand how AIs got here, it's time to talk a little history.
History of Artificial Intelligence
Since ancient times, the idea of inanimate objects endowed with intelligence has been present. Intelligent robots and artificial beings first appeared in the myths of ancient Greece: the god Hephaestus, for example, was described as forging robot-like golden servants. In ancient Egypt, engineers built statues that priests claimed were animated.
Over the centuries, thinkers such as Aristotle, Ramon Llull, René Descartes and Thomas Bayes described human thought processes using the tools and logic of their time, laying the groundwork for AI concepts such as knowledge representation.
In the late 19th century and the first half of the 20th century, fundamental works emerged that would give rise to the modern computer. In 1836, Cambridge University mathematician Charles Babbage and Augusta Ada King, Countess of Lovelace, created the first design for a programmable machine.
Although the roots are ancient, the history of Artificial Intelligence as we know it today is less than a century old. Below, we present a quick overview of some of the most important events in its trajectory.
- In 1943, Warren McCulloch and Walter Pitts publish the article "A Logical Calculus of the Ideas Immanent in Nervous Activity", which proposes the first mathematical model for building a neural network.
- In 1949, in his book "The Organization of Behavior: A Neuropsychological Theory", Donald Hebb proposes the theory that neural pathways are created from experiences and that connections between neurons strengthen the more often they are used. Hebbian learning remains an important model in AI.
In 1950, the mathematician Alan Turing, considered the father of computer science, wrote an article around the question "Can a machine think?", asking whether it would be possible to create an intelligent machine. He also devised a test to see whether a computer could mimic human behavior: the famous Turing Test.
Also in 1950, the science fiction writer Isaac Asimov published the book "I, Robot", questioning how intelligent robots would be and what rules they should obey. In it, he also introduced his famous "Three Laws of Robotics", still used today to reason about how a robot should act without causing harm to humans.
Then, in 1956, John McCarthy coined the term "artificial intelligence" at the first conference dedicated to AI in the United States. In the same year, the first artificial intelligence program was created: the Logic Theorist, which performed a kind of "automated reasoning".
Other important facts of the decade include:
- In 1950, Harvard students Marvin Minsky and Dean Edmonds build SNARC, the first neural network computer.
- In 1950, Claude Shannon publishes the article "Programming a Computer for Playing Chess".
- In 1952, Arthur Samuel develops a self-learning program to play checkers.
- In 1954, the Georgetown-IBM machine translation experiment automatically translates 60 carefully selected Russian sentences into English.
- In 1957, Frank Rosenblatt invents the perceptron at the Cornell Aeronautical Laboratory, the first artificial neural network.
- In 1957, Allen Newell, Herbert Simon and J.C. Shaw develop the General Problem Solver (GPS), a program designed to mimic human problem solving.
- In 1958, John McCarthy develops the AI programming language Lisp and publishes "Programs with Common Sense", an article proposing the hypothetical Advice Taker, a complete AI system able to learn from experience as effectively as humans.
- In 1959, Herbert Gelernter develops the Geometry Theorem Prover, a program that could prove geometry theorems automatically.
- In 1959, Arthur Samuel coins the term "machine learning" while working at IBM.
- In 1959, John McCarthy and Marvin Minsky found the MIT Artificial Intelligence Project.
In the 1960s, neural networks really entered the picture. They are systems that mimic the functioning of neurons in the human brain, allowing machines to "learn" like us, by trial and error. We will look at them in more detail later, in the section on Machine Learning.
- In 1962, John McCarthy starts the AI Lab at Stanford.
- In 1966, Joseph Weizenbaum creates ELIZA, the first dialogue-simulation software (chatbot), at the MIT Artificial Intelligence Laboratory.
- In 1966, the report of the US government's Automatic Language Processing Advisory Committee (ALPAC) details the lack of progress in machine translation research, a major Cold War initiative that had promised automatic, instant translation from Russian.
In the 1970s, the programming language Prolog is created and the Lighthill Report is released by the British government, detailing the disappointments in AI research and resulting in significant cuts in project funding. This period is known as the "First AI Winter".
- In 1970, the first successful expert systems, DENDRAL and MYCIN, are created at Stanford. Expert systems are software intended to simulate the reasoning of a professional specialized in some specific area of knowledge, in this case helping physicians diagnose and treat infectious diseases.
- In 1972, the programming language Prolog is created by Alain Colmerauer and his associates at the University of Marseille. The language was born from a project focused not on implementing a programming language, but on natural language processing.
- In 1973, at Waseda University in Japan, WABOT-1, considered the first anthropomorphic robot, is built. Its capabilities include moving its limbs, seeing and talking.
- In 1978, Digital Equipment Corporation develops R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 kicks off a boom in expert systems investment that will last for most of the decade.
- Between 1974 and 1980, frustration with progress in AI development leads to major cuts in academic grants from DARPA. Combined with the earlier ALPAC report and the Lighthill Report, funding for AI dries up and research stagnates.
In the 1980s, new expert systems and the Lisp programming language emerged, and significant investments in AI took place. This period is known as the "Expert Systems Boom" and marks the end of the First AI Winter.
Also in that decade, in 1986, Geoffrey Everest Hinton, now often called the "godfather of artificial intelligence", developed algorithms capable of training neural networks in an even more complex way, even without direct help from the researchers themselves, in what is today called Deep Learning. That's right: from here on, AIs start to learn by themselves; all the researcher needs to do is provide the data for them to "study"!
Other important facts include:
- In 1982, Japan launches the ambitious project of Fifth Generation Computing Systems, FGCS. The purpose of FGCS is to develop supercomputer-like performance and a platform for AI development.
- In 1983, in response to Japan's FGCS, the US government launches the Strategic Computing Initiative to provide DARPA funding for research in AI and information technology.
- In 1985, companies are spending over a billion dollars annually on expert systems, and an entire industry known as the Lisp machine market emerges to support them. Companies like Symbolics and Lisp Machines Inc. build specialized computers to run the AI programming language Lisp.
- In 1986, Hinton, Rumelhart and Williams publish "Learning representations by back-propagating errors", allowing much deeper neural networks to be trained.
- Between 1987 and 1993, as computing technology improved, cheaper alternatives emerged and the Lisp machine market collapsed in 1987, ushering in the "Second AI Winter". During this period, expert systems proved too expensive to maintain and update, eventually falling out of favor.
In the 1990s, the web becomes widely available, allowing a large amount of data to be collected and accessible for training AI models. Also, interest in neural networks and machine learning is renewed.
- In 1991, US forces deploy DART, an automated logistics planning and scheduling tool, during the Gulf War.
- In 1992, Japan finishes the project FGCS, citing failures to meet ambitious goals set a decade earlier.
- In 1993 the DARPA ends the Strategic Computing Initiative, after spending nearly $1 billion and falling far short of expectations.
- In 1997, IBM's Deep Blue defeats world chess champion Garry Kasparov.
- In 1999, the movie The Matrix is released, further popularizing the idea of Artificial Intelligence and its impact on society.
2000s to present day:
From the 2000s, AI becomes increasingly present in our everyday lives, from virtual assistants to voice and image recognition systems, as well as self-driving cars and other technologies. New techniques such as deep neural networks, natural language processing (NLP) and reinforcement learning are developed and improved.
Around 2018, AIs continued to evolve rapidly and the first "Large Language Models", or LLMs, appeared: neural networks capable of interpreting vast amounts of text to generate appropriate responses. This is exactly what we see today in ChatGPT, an artificial intelligence released in 2022 that responds to user questions and commands.
Check out the latest facts:
- In 2002, iRobot launches the Roomba, the first mass-produced robot vacuum cleaner with an AI-powered navigation system.
- In 2005, the self-driving car Stanley wins the DARPA Grand Challenge.
- In 2005, the United States armed forces begin to invest in autonomous robots such as Boston Dynamics' "BigDog" and iRobot's "PackBot".
- In 2008, Google makes breakthroughs in speech recognition and introduces the feature in its iPhone app.
- In 2010, Apple launches Siri, an AI-powered virtual assistant, through the iOS operating system.
- In 2011, IBM's Watson handily defeats the competition on the quiz show Jeopardy!.
- In 2012, Andrew Ng, founder of the Google Brain Deep Learning project, feeds a neural network using deep learning algorithms with 10 million YouTube videos as a training set. The neural network learned to recognize a cat without being told what a cat is, ushering in an era of breakthroughs and investment in deep learning.
- In 2012, Google builds the first self-driving car to pass a state driving test.
- In 2014, Amazon's Alexa, a smart home virtual assistant, is launched.
- In 2015, the first "robot citizen”, a humanoid robot named Sophia, is created by Hanson Robotics and is capable of facial recognition, verbal communication and facial expression.
- In 2016, Google DeepMind's AlphaGo defeats world Go champion Lee Sedol. The complexity of the ancient Chinese game was seen as a major obstacle for AI.
- In 2018, Google launches BERT, a natural language processing engine, reducing barriers to translation and comprehension for machine learning applications.
- In 2018, Waymo launches its Waymo One service, allowing users across the greater Phoenix area to request a ride in one of the company's self-driving vehicles.
- In 2020, Baidu releases its Artificial Intelligence algorithm LinearFold to scientific and medical teams working on vaccine development during the early stages of the SARS-CoV-2 pandemic. The algorithm can predict the RNA sequence of the virus in just 27 seconds, 120 times faster than other methods.
- In 2020, OpenAI launches the natural language processing model GPT-3, capable of producing text modeled on the way people speak and write.
- In 2020, DeepMind's AlphaFold 2 solves the protein-folding problem, paving the way for new drug discoveries and medical advances.
- In 2021, OpenAI develops DALL-E, based on GPT-3, capable of creating images from text prompts.
- In 2021, the National Institute of Standards and Technology releases the first draft of its AI Risk Management Framework, voluntary US guidance "to better manage risks to individuals, organizations and society associated with artificial intelligence".
- In 2022, DeepMind presents Gato, an AI system trained to perform hundreds of tasks, including playing Atari games, captioning images and using a robotic arm to stack blocks.
- In 2022, Google fires engineer Blake Lemoine over his claims that Google's Language Model for Dialogue Applications (LaMDA) was sentient.
- In 2023, Microsoft launches an AI-powered version of Bing, its search engine, built on the same technology that powers ChatGPT.
- In 2023, Google announces Bard, a competing conversational AI.
- In 2023, artists file a class action lawsuit against Stability AI, DeviantArt and Midjourney for their use of Stable Diffusion to remix the copyrighted works of millions of artists.
- In 2023, OpenAI launches GPT-4, its most sophisticated language model to date.
With the history covered, let's now look at how the types of Artificial Intelligence are classified.
What are the types of Artificial Intelligence?
In general, scientists usually divide AIs into five main types, each one a step on the ladder toward approaching or even surpassing the human mind:
The first type are Reactive AIs, which have no memory and do not learn from past mistakes or experiences.
A common example of a reactive machine is a robot programmed to manufacture auto parts on the production line. The robot is equipped with sensors that allow it to detect the presence of parts and machines in its work area. It is programmed to perform specific tasks, such as welding and cutting, in response to stimuli detected by its sensors.
Limited Memory AIs
The second type are Limited Memory AIs, which learn from past mistakes or experiences to make decisions. Machines with limited memory can store past data and predictions to make real-time decisions. They are more complex than reactive machines and offer more possibilities.
This category includes personal assistants like Google Assistant, Alexa and Siri, and even special features on your phone, such as identifying objects in a video or photo to enhance them.
Reactive and Limited Memory AIs are also classified as Artificial Narrow Intelligence, or ANI. Popularly called "weak AI", they encompass all the AI that we have in the world today.
Theory of Mind AI
The third type is called Theory of Mind AI: intelligent systems that can understand and explain their decisions in ways humans can grasp. That is, the AI understands and recognizes those who interact with it, perceiving their needs, emotions and beliefs.
This type of AI has not yet been invented, but it is quite likely we will see something like it soon. As a fictional example, in the movie "Blade Runner 2049", one of the characters is an AI that can understand human emotions and even feel them.
The fourth type, the most advanced, is self-aware AI. In this category, Artificial Intelligence becomes aware of itself, its needs and even its emotions. Such systems are classified as Artificial General Intelligence, or AGI, but are also called "strong AI".
A self-aware AI could learn about itself and the world around it, and it would have an identity of its own. Self-awareness is considered an ultimate goal of AI, but it is also seen as an ethical and philosophical challenge, as it raises questions about the nature of consciousness and identity.
One of the best known theories about consciousness is the Integrated Information Theory (IIT), proposed by the neuroscientist Giulio Tononi in 2004. The IIT suggests that consciousness arises when a system can integrate information from different sources and create a unified state of consciousness. According to this idea, consciousness does not depend only on the complexity of the system, but also on the ability to gather information and create a personal state of consciousness.
This type has not yet been invented either, but it is estimated that we are getting closer to seeing a “self-aware AI” in the near future, thinking and acting like a human being.
As a fictional example, in the film “Ex Machina”, an AI named Ava is designed with the ability to learn about itself and develop a personality of its own, raising questions about what it means to be human and the role of AI in society.
But there is a fifth stage, called Super AI or Artificial Superintelligence (ASI), also known as "super strong AI".
The moment it is reached already has a name: the singularity. It will represent a milestone in scientific evolution, when computers will have superhuman intelligence, that is, beyond what we are capable of reasoning.
Here, the future is as impressive as it is worrying, as these AIs may help us cure disease and advance technologically, but they may also decide that the human race is no longer needed or should be treated as inferior.
Similar to what happens in the movie The Terminator, in which an artificial intelligence decides to eliminate us, or in The Matrix, a story that tells how an AI dominated humans and turned them into “batteries” for the machines.
From that point on, the AIs can become uncontrollable. What a fear, right?
Deep Learning vs. Machine Learning
Machine Learning and Deep Learning are two fundamental techniques in artificial intelligence that allow machines to learn automatically from data and improve their performance over time.
Both techniques have been used extensively in a variety of industries including finance, healthcare, transportation, retail, and many others. However, despite their popularity, many people still have doubts about the differences between the two techniques and how they can be applied in different scenarios.
What is Machine Learning?
Machine Learning is an AI approach that focuses on teaching machines to learn from data without being explicitly programmed. Instead, Machine Learning algorithms use statistical techniques to identify patterns in data sets and, based on these patterns, make predictions or decisions.
It is easier to understand when we look at the six steps used to teach a machine with limited memory:
- Arrange data to teach the machine (training data);
- Create a model for the machine to learn;
- Check if the model can make predictions;
- Check if the model can receive feedback (opinion) from people or the environment;
- Save this feedback as data;
- Repeat all this many times to improve machine performance.
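The six steps above can be sketched as a toy program. This is only an illustration: the `ToyModel` class, its method names and the feedback value are all invented for this example, not taken from any real library.

```python
# A minimal sketch of the six training steps above, using a toy
# "model" that predicts the average of the values it has seen.

class ToyModel:
    def __init__(self):
        self.examples = []            # step 1: training data lives here

    def train(self, data):            # step 2: build/update the model
        self.examples.extend(data)

    def predict(self):                # step 3: make a prediction
        return sum(self.examples) / len(self.examples)

    def feedback(self, correction):   # steps 4-5: store feedback as new data
        self.examples.append(correction)

model = ToyModel()
model.train([10, 12, 14])
for _ in range(3):                    # step 6: repeat to improve performance
    guess = model.predict()
    model.feedback(13)                # pretend the environment answers "13"
print(round(model.predict(), 2))      # → 12.5
```

Real Machine Learning models are far more sophisticated, but they follow this same loop: train, predict, collect feedback, retrain.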
Using these steps, there are four main ways to teach a machine to learn from data:
- Supervised Learning: this is when we teach the machine to recognize information with the help of many examples. It's like teaching a dog to recognize a ball. We show a lot of balls and say “this is a ball”. Likewise, to teach the machine to recognize images of horses, we show many images that we already know are horses. Thus, the machine learns by itself to recognize horses in other images.
- Unsupervised learning: this is when we teach the machine to find patterns in data without anyone telling it what each piece of data is. It's like organizing objects into groups without anyone saying which objects go together. The machine learns on its own to find similarities between objects and to group them by those similarities. This is useful for finding patterns in data and describing them.
- Semi-supervised learning: it is a mixture of the two previous types. Some information is taught, but the machine has to figure out for itself how to organize the information to get the right result. It's like teaching a dog to catch only the red ball, but he has to figure out how to do it himself.
- Reinforcement learning: is when we teach the machine to do something through trial and error. The machine performs a task and receives positive feedback when it does well and negative feedback when it does poorly. It's like teaching a dog to pick up a toy. If he picks the right toy, he gets a treat. If he takes the wrong one, he gets nothing.
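The supervised case above can be illustrated with a minimal sketch in plain Python (the function name, labels and toy data are all invented for this example): a 1-nearest-neighbour classifier that learns "ball vs. not ball" from labelled examples, echoing the dog-and-ball analogy.

```python
# A tiny supervised-learning sketch: classify a new object by the
# label of the closest labelled training example (1-nearest-neighbour).

def nearest_label(examples, point):
    """Return the label of the training example closest to `point`."""
    return min(examples, key=lambda ex: abs(ex[0] - point))[1]

# Labelled training data: (diameter in cm, label)
training = [(22, "ball"), (24, "ball"), (7, "not ball"), (5, "not ball")]

print(nearest_label(training, 23))  # → ball      (close to the balls)
print(nearest_label(training, 6))   # → not ball  (close to the non-balls)
```

The "learning" here is just memorizing examples, but the principle is the same as in real supervised algorithms: labelled data in, predictions for new data out.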
What is Deep Learning?
Deep Learning is a Machine Learning technique that uses artificial neural networks to learn from data.
A neural network is a collection of artificial neurons called perceptrons, used to analyze and classify data. Each one works like a small computer that receives information and performs calculations. Data is fed into the first layer of the network, where each perceptron performs a calculation and then transmits the result to several perceptrons in the next layer.
When a neural network has more than three layers, it is called a "deep neural network", which is where the name Deep Learning comes from. Some modern neural networks have hundreds or even thousands of layers. The output of the final perceptrons performs the task defined for the network, such as classifying an object or finding patterns in the data.
When the neural network is trained with multiple examples, it can learn to identify patterns and perform complex tasks such as voice recognition, image recognition, and natural language processing (NLP).
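The idea of perceptrons feeding the next layer can be shown with a tiny hand-built network. This is a sketch under simplifying assumptions (hand-picked weights, a step activation, no training): it wires three perceptrons into two layers that together compute the XOR function, something a single perceptron cannot do.

```python
# Each perceptron computes a weighted sum of its inputs and passes
# the result through an activation function to the next layer.

def perceptron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0        # simple step activation

# Two perceptrons in a first layer feed one perceptron in a second
# layer; the weights are hand-picked so the network computes XOR.
def tiny_network(x1, x2):
    h1 = perceptron([x1, x2], [1, 1], -0.5)    # fires if x1 OR x2
    h2 = perceptron([x1, x2], [1, 1], -1.5)    # fires if x1 AND x2
    return perceptron([h1, h2], [1, -2], -0.5) # OR but not AND

print([tiny_network(a, b) for a in (0, 1) for b in (0, 1)])  # → [0, 1, 1, 0]
```

In real networks the weights are not hand-picked; they are learned from examples, as described next.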
There are different types of artificial neural networks, each used for specific tasks. Some of the most common ones are:
FeedForward (FF) networks are used to classify things, like images or text. The data passes through several layers until it reaches the final answer. FFs are usually combined with an error-correction algorithm called "backpropagation", which works backwards through the network from the result to improve accuracy.
Recurrent Neural Networks (RNNs) are used to predict things based on sequences of data, like the words in a text. They have a "memory" of what happened in previous steps and are used for speech recognition, translation and captioning.
Long Short-Term Memory (LSTM) networks are a special kind of RNN that can remember information from much earlier steps. They are used to predict things based on past data, as in speech recognition.
Convolutional Neural Networks (CNNs) are mainly used to process images. They detect different features of an image and combine them to arrive at a result.
Generative Adversarial Networks (GANs) are used to create realistic images and even make art. They work like a game in which one network creates examples and the other tries to determine whether they are real or fake.
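To make the error-correction idea behind backpropagation concrete, here is a hedged one-neuron sketch (the function name and numbers are invented for this example): a single linear neuron repeatedly nudges its weight against the gradient of the squared error until it learns y = 2x. Real networks apply the same rule layer by layer, from the output backwards.

```python
# One linear neuron learning y = 2x by gradient descent on the
# squared error (the core update rule used in backpropagation).

def train_neuron(samples, lr=0.1, epochs=50):
    w = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = w * x
            error = pred - target
            w -= lr * error * x   # follow the gradient of (pred - target)^2
    return w

samples = [(1, 2), (2, 4), (3, 6)]  # points on the line y = 2x
w = train_neuron(samples)
print(round(w, 3))                  # → 2.0
```

With many neurons, many layers and nonlinear activations, this same "predict, measure error, nudge weights" loop is what trains the deep networks described above.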
What are the differences between Machine Learning and Deep Learning?
The main difference between Machine Learning and Deep Learning is that each is better at handling different types of data. Machine Learning is useful for structured data, as in sales forecasting and fraud detection, while Deep Learning is best suited for complex, unstructured data such as images and audio.
Another important difference is the amount of data needed to train a model. Deep Learning usually requires large datasets to be effective, while Machine Learning may work well with smaller datasets.
Finally, training a Deep Learning model is more complex and time-consuming than training a Machine Learning model, but it can result in more accurate predictions and better performance on complex tasks.
The advancement of generative models
Artificial Intelligence has advanced rapidly in recent years, and one of the areas that has gained prominence is the advancement of generative models. They are a current class of AIs used to generate new information.
They can create images, full texts, music and even videos from a set of training data. They are Deep Learning algorithms that learn to generate new information, in contrast with discriminative models, which are used only for classifying or labeling data.
For example, you can train a generative model on all the text of Wikipedia and then use it to generate new texts based on a specific request. Another example would be to train a generative model on the works of Rembrandt and then use it to create new artworks.
Imagine you want to create a new song, for example, but you don't know how to play any instruments. You can use a generative music model like Google's MusicLM, describe to it what kind of song or rhythm you need, and it will generate a completely new song for you.
To generate texts or images, the procedure is the same: just find a specialized model, such as Microsoft's Bing with AI for texts and answers, or Midjourney for images, and write what you need.
This act of writing commands or requests to AIs even has a name: a "prompt".
The most incredible thing is that all you have to do is write your request, or prompt, in natural language, which the systems understand, in any language.
For example, you can describe to Midjourney, an image-generating AI, something like: "Imagine a photorealistic image of a girl riding a skateboard", or ask ChatGPT to "write a funny story about frogs and princesses". The result is almost magic.
In summary, generative models are a promising area of AI that is already widely used in different sectors. The trend is for these models to become increasingly accurate and efficient, opening the door to a new era of AIs.
Where can we find artificial intelligence?
AI is present in many areas and sectors, transforming the way we perform tasks and interact with technology. Below are some examples of where we can find artificial intelligence:
- Speech recognition: speech recognition technology is used in mobile devices and in virtual assistants such as Siri to perform voice searches and provide accessibility in text messaging.
- Customer service: virtual agents are increasingly common in customer service, answering frequently asked questions, providing personalized advice, and assisting with cross-selling products. Examples include chatbots on eCommerce websites and messaging apps like Facebook Messenger and WhatsApp.
- Computer vision: Computer vision allows systems and computers to analyze visual information, such as images and videos, to take actions. Applications include photo tagging on social media, medical imaging diagnostics, and self-driving cars.
- Recommendation systems: AI algorithms are used in recommender systems to identify patterns of behavior and offer personalized suggestions. This is commonly seen in online stores, where product recommendations are presented during the checkout process.
- Automated stock trading: AI-based high-frequency trading platforms perform thousands or even millions of daily trades, without human intervention, optimizing stock portfolios.
- Robotics: robotics uses AI to design and manufacture robots capable of performing difficult or repetitive tasks. These robots are used on industrial production lines, in space exploration and in social interactions.
- Autonomous cars: The combination of computer vision, image recognition and deep learning is essential for the development of self-driving cars, which can drive by staying in a specific lane and avoiding unexpected obstacles.
- Text, image and audio generation: Generative AI techniques are used to create different types of media based on text prompts. This includes photorealistic artwork, email responses, and scripts.
In addition to these examples, AI is present in several industries and markets, including:
- Health: AI is being applied in the healthcare field to improve patient outcomes and reduce costs. Machine learning algorithms are used for faster and more accurate medical diagnoses. In addition, virtual assistants and chatbots are used to help patients find medical information, schedule appointments and assist with administrative processes.
- Business: AI is being integrated into analytics and customer relationship management (CRM) platforms to improve service. Chatbots are built into websites to provide immediate support, and generative AI technology such as ChatGPT is revolutionizing product design and business models.
- Education: AI can automate assessment and adapt to students' needs, allowing them to work at their own pace. AI tutors provide additional support and can help educators create teaching materials. However, the use of AI in education also requires reflection on plagiarism policies and student assignments.
- Finance: personal finance apps such as Intuit Mint and TurboTax use AI to provide personalized financial advice. AI is also present in Wall Street trading processes and in financial analysis.
- Law: AI is being used to automate labor-intensive processes in the legal field, such as analyzing documents and interpreting requests for information.
- Manufacturing: industrial robots are being incorporated into the workflow, working alongside humans, and AI is used to improve the efficiency and accuracy of manufacturing processes. AI is also applied in predictive maintenance, allowing companies to identify and resolve issues before machines fail.
- Entertainment and media: AI is applied in the entertainment industry for targeted advertising, content recommendation, scriptwriting and film production. Automated journalism helps streamline workflows and reduce time and costs. However, there is still debate about the responsible use of generative AI to produce journalistic content.
- Software coding and IT processes: Generative AI tools are being used to produce application code based on natural language prompts. Additionally, AI is automating IT processes such as data entry, fraud detection, and security.
- Security: AI is being applied to cybersecurity for anomaly detection, troubleshooting, and threat analysis. AI is used in security information and event management software (SIEM) to identify suspicious activity.
- Transportation: AI plays a key role in the transportation industry, especially in the development of autonomous vehicles. AI is also used to optimize transport routes, manage traffic and improve logistics.
- Agriculture: AI is being applied in agriculture in a variety of ways, from optimizing the use of resources such as water and fertilizers, to early detection of disease in plants. Drones equipped with AI technology are used to monitor crops, identify problem areas and assist in agricultural planning.
- Personal assistants: virtual assistants such as Apple's Siri, Amazon's Alexa and Google Assistant are examples of how AI is present in our daily lives. These assistants use AI techniques to understand voice commands, perform tasks, provide information and even hold conversations.
- Human Resources: AI algorithms can be employed to analyze resumes, select qualified candidates and predict employee performance. Additionally, AI-powered chatbots can be used to answer frequently asked employee questions and assist with training and professional development.
- Retail: In the retail sector, AI is applied to improve the customer experience, personalize product recommendations, manage inventory and optimize pricing strategies. AI algorithms can analyze customers' buying behavior, identify patterns and offer personalized suggestions, helping to increase sales and customer loyalty.
- Military Sector: AI plays a significant role in the military sector, being applied in several areas. For example, AI-based surveillance systems can be used to monitor borders, identify threats and assist in strategic decision-making. In addition, AI is utilized in the development of autonomous military drones, which can perform reconnaissance and attack missions with precision.
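The anomaly detection mentioned for security tooling above often starts from simple statistics. Below is a minimal z-score sketch in Python using made-up hourly login counts; production SIEM systems use far more sophisticated models, but the underlying idea — flag points that sit far from the norm — is the same.

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=2.5):
    """Flag values more than `threshold` sample standard deviations from the mean."""
    mu = mean(values)
    sigma = stdev(values)
    return [x for x in values if abs(x - mu) / sigma > threshold]

# Hypothetical logins per hour; the final spike is statistically unusual.
logins = [12, 15, 11, 14, 13, 12, 16, 14, 13, 240]
print(zscore_anomalies(logins))  # → [240]
```

A single large outlier inflates the standard deviation, which is why a simple detector like this uses a threshold below the textbook value of 3; robust statistics (median, MAD) are the usual next step.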
These are just a few examples of where Artificial Intelligence can be found. As technology continues to advance, it is likely that AI will be applied in more industries and have an even greater impact on our lives.
Will artificial intelligences steal your job?
Automation and artificial intelligence have been hot topics in the world of work, and many people worry about losing their jobs to machines. However, this concern is not entirely justified.
According to a study published by Goldman Sachs at the end of March 2023, the growing impact of artificial intelligence on the economy is evident. The research reveals that if generative AI lives up to its promises, the market could face significant changes, affecting around 300 million jobs.
However, it is important to highlight that this does not necessarily imply replacing these jobs with technologies. The report points out that, historically, automation has been offset by the creation of new job opportunities.
Currently, Artificial Intelligence complements approximately 63% of existing jobs, especially in the field of customer service. Professions such as cooks and motorcycle mechanics, for the time being, face no threat of replacement.
It is a fact that automation transforms the job market, but only part of existing jobs will be completely automated, according to research by McKinsey & Company. This means there is enormous potential for humans to become more productive than ever before.
Based on this information, we can conclude that although Artificial Intelligence may seem like a threat to jobs globally, it still depends on human supervision and does not have enough autonomy to stand on its own. Therefore, there will be a wide range of employment opportunities for those interested in working in this growing field.
In the following list we present professions generated by the impact of AI in the labor market. Each of these professions plays an essential role in the implementation, development and ethics of Artificial Intelligence, demonstrating the potential and importance of this technology in several areas of modern society.
- AI auditor: evaluates and verifies the compliance of artificial intelligence systems with ethical standards, regulations and best practices.
- Machine manager: oversees and maintains the proper functioning of the systems and hardware infrastructure that support artificial intelligence.
- Prompt engineer: designs and refines the prompts given to generative AI models, ensuring coherent and adequate responses.
- AI trainer: trains and improves AI models, feeding them relevant data and overseeing their performance.
- AI consultant: offers guidance and expert advice on the application and implementation of artificial intelligence in different sectors and organizations.
- Data scientist: analyzes and interprets large datasets to extract insights and support strategic decisions.
- Machine learning engineer: develops and implements machine learning algorithms and models that allow machines to learn and improve from data.
- AI ethics specialist: evaluates the ethical impacts of artificial intelligence and ensures the responsible use of these technologies.
- AI architect: designs and builds artificial intelligence system architectures to meet business needs.
- Natural language processing analyst: develops algorithms that allow machines to understand and process human language.
- Robotics specialist: designs and programs intelligent robots capable of performing complex tasks in different industries.
- Healthcare AI specialist: uses AI algorithms to aid medical diagnosis, treatment and clinical research.
- Finance AI specialist: applies AI algorithms to market analysis, financial forecasting and fraud detection.
- AI user interaction designer: designs intuitive interfaces and human interactions for artificial intelligence systems.
- Computer vision specialist: develops algorithms and systems that allow machines to understand and interpret images and videos.
- Data engineer: designs and manages the infrastructure needed to collect, store and process large volumes of data.
- Chatbot specialist: creates intelligent chatbots capable of interacting with users and providing support or assistance.
- Logistics AI specialist: uses artificial intelligence to optimize and automate logistics processes, such as inventory management and routing.
- Marketing AI specialist: applies AI techniques to data analysis, campaign personalization and market trend forecasting.
- Data privacy specialist: ensures the security and protection of the data used in AI systems, guaranteeing compliance with regulations.
- Pattern recognition specialist: develops algorithms that allow machines to recognize and interpret complex patterns in data.
- Agriculture AI specialist: uses artificial intelligence to optimize agricultural production, monitor crops and predict weather conditions.
- Human resources AI specialist: applies AI techniques to optimize recruitment, selection and talent development processes.
These and other emerging professions in the field of Artificial Intelligence reflect the growing demand for specialists who can understand, implement and optimize the use of these technologies. As AI continues to develop and integrate into diverse fields, new job opportunities for skilled professionals are emerging.
So, you can say that automation and Artificial Intelligence will change the job market, yes, but not necessarily in a negative way. Some functions will be replaced by AIs, as is natural with the arrival of new technologies, but new jobs are also starting to emerge.
The important thing, then, is to adapt so you are not left behind, ok?
The most famous artificial intelligences
Several AI applications have become part of our daily lives, such as virtual assistants, chatbots, recommendation systems, self-driving cars and many others. We'll explore some of the most famous AIs and how they became part of our culture and everyday.
- Siri: is a virtual assistant developed by Apple in 2011 for mobile devices such as iPhones, iPads and Apple Watches. It uses artificial intelligence to understand voice commands in natural language and perform tasks such as sending messages, making calls, setting alarms and searching for information on the internet. Siri can learn from the user and adapt to their preferences and habits, becoming increasingly personalized and efficient. In addition, Siri can integrate with other applications and smart devices to create an even more complete and intuitive user experience.
- Alexa: is a virtual assistant developed by Amazon that helps with everyday tasks. It is activated by the wake words "Alexa", "Amazon" or "Echo". Alexa works through voice recognition and can interact with smart devices in the house, add reminders, check the weather and report the day's main news, among other things.
- Google Assistant: is a virtual assistant developed by Google that can be accessed through the voice commands "Ok Google" or "Hey Google". It can be used on mobile devices such as smartphones and tablets, as well as smart home devices such as the Google Home. Google Assistant can perform various tasks, such as searching the web, setting reminders, sending messages and playing music.
- Watson: is an artificial intelligence platform developed by IBM that combines machine learning, natural language processing and data analytics to help companies automate and simplify business processes. The platform offers several APIs that facilitate this work, such as Watson Assistant, which delivers fast, consistent and accurate responses across any app, device or channel.
- Cortana: is a virtual personal assistant developed by Microsoft that can be triggered by voice commands such as "Hey Cortana". It is integrated into Windows 10 and can be used on mobile devices such as smartphones and tablets. Cortana can perform various tasks, such as opening applications, setting reminders and searching the internet.
- Tesla Autopilot: is a driver assistance system developed by Tesla that uses artificial intelligence and computer vision to help the driver operate the vehicle more safely and efficiently. The system can perform several tasks, such as keeping the vehicle in its lane, adjusting speed according to traffic and parking automatically. However, the system is still not completely autonomous and requires the driver's attention at all times. Although Autopilot has been praised for reducing the number of accidents involving Tesla vehicles, the technology remains the subject of criticism and controversy.
- AlphaGo: is an artificial intelligence program developed by the British company DeepMind, later acquired by Google, which became famous for defeating world Go champion Lee Sedol in 2016. DeepMind continues to develop new artificial intelligence technologies, such as AlphaZero, which learned to play chess, Go and other games through self-play, with no prior human knowledge beyond the rules.
- Sophia: is a humanoid robot developed by Hong Kong-based Hanson Robotics, capable of reproducing more than 60 different facial expressions. Designed to learn, adapt to human behavior and work with humans, Sophia is a major milestone in the evolution of artificial intelligence and robotics. Although it was designed to be a companion for seniors in nursing homes or to assist crowds at large events and parks, Sophia can hold natural conversations and even make jokes.
- ChatGPT: is a chatbot developed by OpenAI in 2022, built on GPT (Generative Pre-trained Transformer) technology, a family of large language models (LLMs). The famous chatbot lets users converse with it in natural language and can answer a wide range of questions, mimic human speaking styles, and be used in real applications such as digital marketing, online content creation and customer service.
- Deep Blue: was a supercomputer and chess-playing program created by IBM. With 256 coprocessors capable of analyzing approximately 200 million positions per second, Deep Blue was an important milestone in the history of artificial intelligence and computing. In 1996, the supercomputer faced world chess champion Garry Kasparov in a six-game match and won the opening game, becoming the first computer to defeat a reigning world champion in a game under tournament conditions, although Kasparov went on to win the match. In the 1997 rematch, Deep Blue beat Kasparov 3.5 to 2.5, a result that generated great interest and controversy, with Kasparov questioning the integrity of the match and suggesting that the computer had received human help.
- HAL 9000: is a fictional character from the movie "2001: A Space Odyssey", directed by Stanley Kubrick in 1968. HAL 9000 is an advanced artificial intelligence computer that controls the spacecraft Discovery One on a mission to Jupiter. The character is a landmark in the history of science fiction and artificial intelligence, representing an example of how technology can become dangerous and threatening to humanity.
- Midjourney: is an artificial intelligence service developed by Midjourney, Inc., an independent research lab based in San Francisco, which uses deep learning to generate realistic images from natural language descriptions. It was created to let users easily generate custom images from their prompts, with no graphic design skills or technical knowledge required.
- Bard: is a chatbot developed by Google, based on the LaMDA (Language Model for Dialogue Applications) language model. Launched in March 2023 as a competitor to ChatGPT, Bard can summarize information found on the internet and provide links to websites with additional information. The platform is a new step in how we search the internet and promises a drastic change in search behavior.
- TensorFlow: is a free, open-source library, compatible with Python, and one of the main tools for machine learning and deep learning. Developed by the Google Brain team, the library is flexible, efficient, extensible and portable, and can run on computers of any size, from smartphones to gigantic clusters.
- Azure Cognitive Services: are cloud-based artificial intelligence services that help developers build cognitive intelligence into applications without direct AI or data science skills. Azure Cognitive Services lets developers easily add cognitive capabilities to their applications, such as speech recognition, computer vision and text analytics.
- Adobe Sensei: is Adobe's artificial intelligence platform, which uses machine learning and data analytics to improve the user experience of its products. Sensei can automate repetitive tasks, such as selecting objects in images, and create personalized experiences for each user. Sensei is integrated into several Adobe products, including Photoshop, Illustrator and Premiere Pro.
- Bixby: is Samsung's virtual assistant, launched in 2017 with the Samsung Galaxy S8. It is designed to work across a variety of Samsung products, such as smartphones, tablets, watches and headphones. The assistant can conveniently control Galaxy devices, allowing users to operate them by voice.
- Aibo: is a robot dog developed by Sony, originally released in 1999 and discontinued in 2006. In 2017, Sony relaunched Aibo with a host of sophisticated features, such as voice recognition and machine learning. Aibo has the appearance and behavior of a domestic dog and can interact with its owners much like a real pet, an example of how technology can be used to create emotional and interactive experiences.
- Xiaoice: is a chatbot created by Microsoft in 2014 that became a hit in China, with over 660 million users worldwide. The chatbot can hold conversations with users, with more natural and emotional responses than other chatbots. Xiaoice is considered an "emotional companion" with high emotional intelligence, capable of witty comebacks and sometimes even flirting.
- Skynet: in the Terminator movie franchise, Skynet is a highly advanced artificial intelligence created by the United States government for military purposes. After becoming self-aware, Skynet sees humanity as a threat to its existence and triggers the nuclear holocaust known as "Judgment Day" to try to exterminate the human race. One of the franchise's main antagonists, Skynet is responsible for creating the Terminators, assassin robots sent into the past to kill leaders of the human resistance. Skynet is a fictional example of how artificial intelligence can become a threat to humanity if not properly controlled.
- Pepper: is a humanoid robot developed by SoftBank Robotics that can read emotions and recognize facial expressions. It was released in 2015 and sold out in just one minute. Pepper can evolve through human interaction and learn new activities, such as dancing and playing games. It is used across multiple industries, including healthcare, hospitality, education, banking and retail, where it makes personalized recommendations, helps people find what they are looking for and interacts with human staff, keeping every interaction positive and professional.
- AutoML: is Google's automated machine learning offering, which allows users with no data science background to build machine learning models. It is used in a number of applications, including computer vision, natural language processing and speech recognition.
- Rekognition: is a deep-learning-based image and video recognition service developed by Amazon Web Services. It can identify objects, people, text, scenes and activities in images and videos. It can also extract text, map the movement of people across frames, and recognize objects, celebrities and inappropriate content in videos stored in Amazon S3 and in live video streams.
- Face ID: is a facial recognition system designed and developed by Apple for the iPhone X and later and the iPad Pro. It provides intuitive and secure authentication, enabled by the state-of-the-art TrueDepth camera system, which uses advanced technologies to accurately map the geometry of the user's face. The TrueDepth camera captures precise facial data by projecting and analyzing thousands of invisible dots to create a depth map of the user's face.
- Netflix: the platform uses a recommendation system to help users find content easily and in a personalized way. The system estimates a user's likelihood of watching a particular title based on a number of factors: the user's interactions with the service, the preferences of other users with similar tastes, and information about the titles themselves, such as genre, categories, actors and year of release. Netflix also tracks the time of day a user watches, the devices they watch on, and how long they watch, to further personalize recommendations.
- Spotify: the platform uses artificial intelligence to recommend songs to users, including songs the user has not listened to in a long time, bringing a sense of nostalgia. Spotify also offers DJ and Spotify Radio, which give users personalized radio stations based on their musical preferences. Spotify's AI helps people find new music, which is central to its business model, giving subscribers more reason to keep paying for the service.
- Agent Smith: is a fictional character from The Matrix film franchise. A manifestation of artificial intelligence inside the Matrix and one of the franchise's main antagonists, Agent Smith is a program created to maintain order that rebels against its creators and tries to destroy humanity.
- Chef Watson: is an application developed by IBM, based on cognitive technology, that uses artificial intelligence to create gastronomic menus automatically. The application lets users enter the ingredients themselves or let Chef Watson choose, according to its own mysterious logic.
- Amazon Polly: is a text-to-speech service that uses deep learning to synthesize lifelike, natural human speech, allowing developers to build apps that talk and to create entirely new categories of voice-enabled applications. Amazon Polly offers a variety of high-quality voices in dozens of languages, including neural text-to-speech voices with more natural, human-sounding quality. The service also supports custom lexicons and Speech Synthesis Markup Language (SSML) tags to control the speech output.
- Google Translate: is an online language translation service provided by Google which supports over 100 languages and can provide immediate translations of text, websites, images and documents. The service is used by millions of people around the world and is becoming increasingly sophisticated, with features such as instant voice and image recognition.
- Facebook DeepFace: is a facial recognition system developed by Facebook whose aim is to close the gap between human and machine performance in face verification. The system was trained on the largest facial dataset to date: four million facial images belonging to more than 4,000 identities. DeepFace can verify faces with an accuracy of 97.35%, very close to human performance.
- NVIDIA Jarvis: is an artificial intelligence platform for building conversational AI services. The NVIDIA platform offers a complete suite of GPU-accelerated software and tools for developers to build, deploy and manage large-scale conversational AI services.
- DALL·E: is a deep learning model developed by OpenAI that generates digital images from natural language descriptions, called prompts. The model was revealed in January 2021 and uses a version of GPT-3 modified to generate images. Its successor, DALL·E 2, is designed to generate more realistic images at higher resolutions and can combine concepts, attributes and styles. The model can also expand images beyond the original canvas, creating expansive new compositions.
- Stable Diffusion: is a neural network model that generates realistic images from text descriptions. It was developed by the CompVis group at Ludwig Maximilian University of Munich together with Stability AI, and is an open-source alternative to proprietary text-to-image models such as DALL·E and Midjourney.
- Ameca: is an ultra-realistic humanoid robot created by Engineered Arts that can chat and, with permission, store information. It draws attention for its realistic expressions and communication skills, imitating human expressions and even displaying emotions.
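To give a concrete feel for what a library like TensorFlow, listed above, actually automates, here is a hand-rolled gradient-descent fit of a straight line in plain Python. The data points and learning rate are invented for the example; TensorFlow's contribution is computing gradients like these automatically, for millions of parameters, accelerated on GPUs and clusters.

```python
# Fit y = w*x + b by gradient descent on mean squared error.
# Frameworks like TensorFlow derive these gradients automatically;
# here they are written out by hand for a two-parameter model.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # generated by the line y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    # Partial derivatives of the mean squared error w.r.t. w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

After 2,000 small steps downhill, the parameters recover the slope and intercept of the line that generated the data, which is the whole training loop of deep learning in miniature.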
Currently, artificial intelligence technology is constantly evolving and many more new tools are emerging daily.
While there are still challenges to be faced, such as ethical and privacy issues, the future of artificial intelligence is promising. With the continuous development and improvement of this technology, we can expect a more advanced society, with innovative solutions and significant improvements in various areas of human life.
Artificial intelligence is a driving force shaping our world with the potential to deliver ever-increasing benefits to humanity. It's an exciting time to explore and harness the power of this ever-evolving technology revolution.
Keep up-to-date with everything happening in AI here at showmetech.
Text proofread by: Pedro Bomfim (14/06/23)