
The History of AI

With all the hype of late, you might think that AI is a fairly new topic. The truth is different.

The early days:

Artificial intelligence (AI) has been discussed for a very long time, and the development of AI began long before the term was even coined. The timeline of AI is fascinating, spanning centuries, starting in 250 BC, when the Greek inventor and mathematician Ctesibius developed the first artificial automatic self-regulating system. In this blog post, we will take a closer look at the timeline of AI and how it has evolved over the centuries. One thing has remained the same for more than 2,000 years - we humans have always wanted to create something, someone, like ourselves.

AI in 250 BC

The AI journey began in 250 BC, when Ctesibius developed the world's first artificial automatic self-regulating system, the clepsydra, or "water thief". The device kept the reservoir of a water clock full so that the clock could measure the passage of time accurately.

AI from 380 BC to the late 16th century

During this period, many theologians, mathematicians, and philosophers published materials dealing with mechanical techniques and number systems. This established the notion of mechanized "human" thinking in non-human objects. For example, the Catalan poet and theologian Ramon Llull published Ars generalis ultima (The Ultimate General Art), in which he refined his approach to using mechanical means on paper to develop new knowledge through combinations of concepts.

Around 1495, Leonardo da Vinci designed and built a humanoid automaton called the "Automa cavaliere" or "Automata Knight", also known as Leonardo's robot or Leonardo's mechanical knight.

AI in the early 1700s

Jonathan Swift's novel "Gulliver's Travels" describes a device called "the engine", one of the first literary allusions to modern technology and, specifically, to a computer. The machine's purpose was to improve knowledge and mechanical operations to the point where even the least talented person would appear skilled, aided by a non-human mind simulating artificial intelligence.

A new boost during the industrial revolution:

In 1872, writer Samuel Butler published his novel Erewhon, in which he explored the idea that at some point in the future machines would have the ability to become conscious.

AI from 1900-1950

The early 1900s saw a surge of activity relevant to AI, and its history over those decades makes for an interesting story.

1921: Czech playwright Karel Čapek published a science fiction play entitled "Rossum's Universal Robots". The play centers on factory-made artificial humans, which he called robots - the first use of the term as we know it today. Many people subsequently adopted the robot concept and applied it in their own art and research.

1927: The science fiction film "Metropolis" (director: Fritz Lang) is released. This film is known for being the first depiction of a robot on screen and serving as an inspiration for other famous non-human characters in the future.


1929: Gakutensoku, the first robot designed in Japan, was created by Japanese biologist and professor Makoto Nishimura. The name "Gakutensoku" literally means "learning from the laws of nature," reflecting the idea of a machine whose intelligence draws on knowledge of nature and humans.

1939: The Atanasoff-Berry Computer (ABC), an early electronic digital computer, was developed at Iowa State University by inventor and physicist John Vincent Atanasoff together with his graduate student Clifford Berry. The computer weighed over 700 pounds and could solve systems of up to 29 simultaneous linear equations.

Things got really interesting in the 1940s and 50s.

Milestones in the development of artificial intelligence (image credit: SpringerLink):

1949: The mathematician and actuary Edmund Berkeley published "Giant Brains: Or Machines That Think". The book emphasized that machines were becoming increasingly capable of processing large amounts of information effectively, compared their abilities to the human mind, and concluded that machines can, in a sense, actually think.

The 1950s marked the beginning of artificial intelligence (AI) as a field, as researchers began exploring the possibilities of creating intelligent machines. Major breakthroughs were made during this decade. In 1950, Claude Shannon published his seminal paper on programming a computer to play chess, and Alan Turing published "Computing Machinery and Intelligence", in which he proposed the "Turing test" for judging a machine's ability to exhibit human-like intelligence. Other notable advances included Arthur Samuel's checkers-playing program in 1952, one of the first self-learning programs, and the Logic Theorist, the first AI computer program, developed by Allen Newell, Herbert Simon, and Cliff Shaw in 1955. The American computer scientist John McCarthy and his colleagues also proposed a workshop on "artificial intelligence" in 1955, which led to the term being coined at the 1956 Dartmouth College conference.

To summarize the 1950s:

  • First AI research results based on chess and checkers
  • Alan Turing proposes the Turing test in his essay on "Computing Machinery and Intelligence".
  • John McCarthy coined the term "artificial intelligence" at a Dartmouth College conference
  • The first AI computer program, Logic Theorist, is developed
  • Lisp, an advanced programming language for AI research, is developed

The 1960s were marked by the development of automata and robots, new programming languages, ongoing research, and films depicting artificially intelligent beings. In 1961, the Unimate, an industrial robot invented by George Devol in the 1950s, went to work on the General Motors assembly line in New Jersey. In the same year, James Slagle developed the Symbolic Automatic Integrator (SAINT), a program that solved symbolic integration problems at the level of freshman calculus. In 1964, Daniel G. Bobrow developed STUDENT, an early AI program for reading and solving algebra word problems, and in 1966, MIT professor Joseph Weizenbaum developed the first chatbot, Eliza, a natural language processing program to which some users began to form emotional attachments. In the same year, work began on "Shakey", the first mobile robot, a project combining several areas of artificial intelligence with navigation and computer vision; the robot was completed in 1972 and is now in the Computer History Museum. In 1968, the popular science fiction film 2001: A Space Odyssey was released, featuring the AI computer HAL, and Terry Winograd began developing SHRDLU, an early natural language understanding program.
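Under the hood, Eliza relied on simple pattern matching and template substitution rather than any real understanding. The idea can be sketched in a few lines of Python (the rules below are invented for illustration; they are not Weizenbaum's original DOCTOR script):

```python
import re

# A minimal ELIZA-style chatbot sketch. The transformation rules are
# illustrative assumptions, not Weizenbaum's original script.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # fallback keeps the conversation moving
]

def respond(text: str) -> str:
    """Match the input against each rule in order and fill the template."""
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            return template.format(*m.groups())

print(respond("I am feeling anxious"))  # -> How long have you been feeling anxious?
```

The trick that made Eliza feel alive is visible even here: the program reflects the user's own words back as a question, so the human supplies all the meaning.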

Summary of the 60s:

  • Development of automata, robots and new programming languages
  • The first industrial robot, Unimate, runs on the assembly line at General Motors
  • The chatbot Eliza was developed by MIT professor Joseph Weizenbaum.
  • The first mobile robot, "Shakey", was developed as a project to link different areas of AI
  • The science fiction film "2001: A Space Odyssey" featuring the AI-controlled spaceship HAL is released

In the 1970s, WABOT-1, the first full-scale anthropomorphic robot, was developed at Waseda University in Japan. The decade also brought challenges for the field, including a decrease in government support for research. In 1973, the applied mathematician James Lighthill reported that none of the discoveries in AI had delivered the promised impact, prompting the British government to severely curtail its support for AI research. In 1977, the iconic Star Wars saga began, featuring the humanoid robot C-3PO and the astromech droid R2-D2, who communicates through electronic beeps. In 1979, the Stanford Cart, a remote-controlled mobile robot equipped with a TV camera, crossed a chair-filled room without human intervention, making it one of the earliest examples of an autonomous vehicle.

Other notable advances of the 1950s-70s include the General Problem Solver, an AI program capable of solving a wide range of formalized problems, developed by Allen Newell and Herbert Simon in 1957, and the Georgetown-IBM experiment of 1954, the first public demonstration of machine translation. In 1962, IBM demonstrated the Shoebox, an early computer-based speech recognition system, and in 1969 the first computer network, the ARPANET, was established, paving the way for the Internet. Also in 1969, Victor Scheinman developed the Stanford Arm, an early electrically powered, computer-controlled robot arm, at Stanford University.

And a look at the 70s:

  • Advances related to automata and robots
  • The Lighthill report leads the UK government to sharply cut support for AI research
  • The first anthropomorphic robot, WABOT-1, was developed in Japan
  • The Star Wars film features the humanoid robot C-3PO and the astromech droid R2-D2

The 1980s marked a turning point in the history of artificial intelligence (AI). While the decade saw some significant advances, it was also marked by a period of low interest and funding that became known as the "AI winter". Nevertheless, the decade produced several developments that helped shape the future of AI.

In 1980, Waseda University in Japan developed the WABOT-2, a humanoid robot that could interact with humans and play music on an electronic organ. The robot was an important milestone in this field as it demonstrated the potential of AI to enable machines to simulate human-like behavior and cognition.

Japan's Ministry of International Trade and Industry invested heavily in AI over the decade, allocating $850 million to the Fifth Generation Computer project in 1981. The project aimed to develop computers that could converse like humans, translate languages, interpret images, and perform reasoning. The ambitious program, in turn, spurred Western governments and companies to fund competing AI research in response to the perceived Japanese lead.

The Association for the Advancement of Artificial Intelligence (AAAI) warned of the approaching AI winter in 1984, a prediction that was proven correct within the next three years.

Despite this, there were also some notable achievements in the field. In 1986, the first driverless car, a Mercedes-Benz van, was developed under the direction of Ernst Dickmanns. Equipped with sensors and cameras, it could drive on streets without human intervention - a major milestone in the field of AI.

AI in the 1980s:

  • The AI winter, a period of low interest and funding in the field of AI, began to set in.
  • WABOT-2, a humanoid robot that can interact with humans and play music, was developed at Waseda University.
  • Japan's Ministry of International Trade and Industry committed $850 million to the Fifth Generation Computer project, which aimed to develop computers that could interact like humans, translate languages, interpret images, and perform analysis.
  • The Association for the Advancement of Artificial Intelligence (AAAI) warned of the upcoming AI winter, which became true within the next three years.
  • The first driverless car, a Mercedes-Benz van, was developed under the direction of Ernst Dickmanns.

The 1990s was a time of great advances in the field of artificial intelligence. In his paper Elephants Don't Play Chess, Rodney Brooks proposed a new approach to AI: intelligent systems should be built from the ground up through constant physical interaction with their environment. This approach underlined the importance of the physical interaction between a machine and its surroundings and led to more advanced systems capable of learning and adapting to their environment.

Another notable development during this period was the creation of A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) by Richard Wallace. The chatbot was inspired by Weizenbaum's ELIZA but differed in that it drew on a large collection of natural language sample data. A.L.I.C.E. was a significant advance in natural language processing and opened up new possibilities for human-machine interaction and communication.

In 1997, Sepp Hochreiter and Jürgen Schmidhuber proposed Long Short-Term Memory (LSTM), a type of recurrent neural network (RNN) that has since been used for speech and handwriting recognition. This development paved the way for the advancement of deep learning, making it possible to create AI systems capable of understanding and interpreting complex patterns in sequential data.
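The core idea of LSTM is that learned gates decide what to write to, keep in, and read from a cell state, which is what lets the network remember information across long sequences. A minimal single-step sketch in NumPy (the gate ordering, weight layout, and toy dimensions here are illustrative assumptions, not a production implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step for input x (size d) with hidden size n.

    W (4n x d), U (4n x n) and b (4n,) stack the parameters of the
    four gates; the i/f/o/g ordering is an arbitrary illustrative choice.
    """
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0 * n:1 * n])   # input gate: how much new info to write
    f = sigmoid(z[1 * n:2 * n])   # forget gate: how much old state to keep
    o = sigmoid(z[2 * n:3 * n])   # output gate: how much state to expose
    g = np.tanh(z[3 * n:4 * n])   # candidate values for the cell state
    c = f * c_prev + i * g        # additive update lets gradients flow far back
    h = o * np.tanh(c)
    return h, c

# Toy usage: run a short random sequence through one cell.
rng = np.random.default_rng(0)
d, n = 3, 4                       # input and hidden sizes (arbitrary)
W = rng.normal(size=(4 * n, d))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for t in range(5):
    h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
print(h.shape)  # -> (4,)
```

The additive cell update `c = f * c_prev + i * g` is the key design choice: unlike a plain RNN, gradients are not repeatedly squashed through a nonlinearity, which mitigates the vanishing-gradient problem Hochreiter and Schmidhuber set out to solve.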

IBM's Deep Blue chess computer also made history in 1997: it was the first system to win against a reigning world chess champion. This achievement was a significant milestone in the field of artificial intelligence and demonstrated the potential of intelligent systems to compete with human experts in complex games and other applications.

In 1998, MIT professor Cynthia Breazeal built "Kismet", a robot that can recognize and simulate emotions through its face. Kismet was modeled on a human face, complete with eyes, lips, eyelids and eyebrows.

Also in the 1990s, Dave Hampton and Caleb Chung invented Furby, the first widely popular robotic toy pet for the home. Furby was a major innovation in consumer robotics, opening new possibilities for interactive and engaging human-machine experiences, and its success paved the way for more advanced robotic products, including personal assistants, home security systems, and entertainment devices.


The 1990s:

  • Rodney Brooks, in his paper Elephants Don't Play Chess, proposed a new approach to artificial intelligence: building intelligent systems from the ground up through constant physical interaction with the environment.
  • A.L.I.C.E (Artificial Linguistic Internet Computer Entity), developed by Richard Wallace, was a conversational bot inspired by Weizenbaum's ELIZA, but differed by the additional collection of natural language pattern data.
  • Sepp Hochreiter and Jürgen Schmidhuber proposed Long Short-Term Memory (LSTM), a type of recurrent neural network (RNN) that is used today for speech and handwriting recognition.
  • IBM's Deep Blue chess computer is the first system to win against the reigning world chess champion.
  • Furby, the first household or pet toy robot, was invented by Dave Hampton and Caleb Chung.

Since 2000

Artificial intelligence (AI) has come a long way since its beginnings in 250 BC, with numerous advances and innovations along the way. The 2000s saw significant progress in the development of AI technology. In 2000, Honda released ASIMO, a humanoid robot able to walk as fast as a human and deliver trays to customers in restaurants. In 2002, iRobot launched the Roomba, an autonomous vacuum-cleaning robot that avoids obstacles while cleaning.

In 2004, NASA's Spirit and Opportunity rovers managed to explore the surface of Mars without human intervention. The same year saw the release of the science fiction film I, Robot, set in the year 2035 and in which humanoid robots serve humanity.

In 2009, Google began secretly building a driverless car, which went on to pass Nevada's self-driving test in 2014.

In the last decade, artificial intelligence (AI) has become an integral part of our everyday lives and has shown its influence in almost every aspect of our lives. From virtual assistants that can understand and respond to our voice commands to robots that look and act like humans, technology has advanced significantly, making it easier for us to perform tasks that were once impossible or time-consuming. Here are some of the highlights of AI over the past decade:

In 2010, Microsoft released the Kinect for the Xbox 360, the first gaming device that could track human body movements using a 3D camera and infrared detection. This device paved the way for more advanced gesture recognition technologies in games.

The following year, in 2011, IBM's Watson, a computer that answers questions in natural language, defeated former Jeopardy! champions Ken Jennings and Brad Rutter in a televised match. This was a major milestone in the development of AI and demonstrated its potential to solve complex problems.

That same year, Apple released Siri, a voice-controlled personal assistant for iOS devices. With a natural language interface, Siri could understand and respond to voice commands, making it easier for users to interact with their devices.

In 2012, Google researchers Jeff Dean and Andrew Ng trained a massive neural network across 16,000 processors to recognize images of cats by feeding it 10 million unlabeled frames from YouTube videos. This demonstrated the ability of AI to discover patterns in huge amounts of unstructured data.

The following year, 2013, Carnegie Mellon University released the Never Ending Image Learner (NEIL) program. The program learns information about images it discovers on the Internet 24/7, constantly expanding its knowledge and skills.

In 2014, Microsoft released Cortana, a virtual assistant similar to Apple's Siri. The same year, a chatbot called Eugene Goostman, posing as a 13-year-old boy, was claimed to have passed the Turing test by convincing 33% of the human judges at a Royal Society event that it was human. This was a much-debated milestone in the development of conversational AI.

That same year, Amazon launched Alexa, a virtual assistant built into its Echo smart speakers that acts as a personal assistant. Thanks to its ability to process natural language, Alexa has become a household name.

Elon Musk, Stephen Hawking and Steve Wozniak, along with 3,000 others, signed an open letter in 2015 calling for a ban on the development and use of autonomous weapons for war and for the introduction of ethical standards in the development of AI.

In 2016, Hanson Robotics unveiled the humanoid robot Sophia, while Google launched its smart speaker, Google Home. Sophia is considered the first "robot citizen" and can see, make facial expressions, and communicate via AI. In 2017, Facebook's AI research lab trained two chatbots to negotiate using machine learning; the bots went on to develop their own communication shorthand, highlighting how much remains to be understood about AI systems.

A notable event during this period was Tay, a Twitter bot released by Microsoft in 2016. Tay used machine learning to learn from user interactions and tweets. Just a day after its release, however, Tay began to post racist, sexist and hateful tweets: it had learned not only from friendly interactions but also from hateful and abusive ones, and some users had deliberately tried to turn it into a digital Hitler. The incident demonstrated that AI can be manipulated for malicious purposes and highlighted the need for stricter oversight of AI development and use.

Between 2015 and 2017, Google's DeepMind developed AlphaGo, a computer program that beat several (human) world-class players at the board game Go, including reigning world champions. This was an important milestone in the development of AI for games and strategy.

In 2018, Alibaba developed an AI model that outperformed humans on a Stanford University reading comprehension test, scoring 82.44 against the human benchmark of 82.30 on a set of 100,000 questions. The same year, Google introduced BERT, a language model that lets anyone train their own question-answering system. In 2020, OpenAI presented GPT-3, a large pre-trained language model that generates text.

In 2021, NASA's Perseverance rover made an impressive landing on Mars.

Advances in artificial intelligence (AI) have come to dominate every aspect of our daily lives in recent years. As AI becomes more and more part of our routine, it has become easy to take the technology for granted. Nonetheless, the history of AI illustrates the significant achievements and milestones that have been reached in this field.

AI from 2000 to the present day:

  • Honda launches ASIMO, an artificially intelligent humanoid robot able to walk as fast as a human and deliver trays to customers in restaurants.
  • Steven Spielberg's science fiction film "A.I. Artificial Intelligence" centers on David, a childlike android uniquely programmed with the ability to love.
  • iRobot launches the popular Roomba, an autonomous vacuum-cleaning robot that cleans on its own while avoiding obstacles.
  • NASA's robotic rovers Spirit and Opportunity navigate the surface of Mars without human intervention.
  • Google secretly begins building a driverless car, which passes Nevada's self-driving test in 2014.
  • Microsoft launches Kinect for the Xbox 360, the first gaming device to track human body movements using a 3D camera and infrared detection.
  • Apple introduces Siri, a built-in, voice-controlled personal assistant for Apple users.
  • Google researchers Jeff Dean and Andrew Ng train a huge neural network across 16,000 processors to recognize images of cats without background information.
  • Alibaba develops an AI model that outperforms humans on a Stanford University reading comprehension test.
  • OpenAI's GPT-3, a language model that generates text using pre-trained algorithms, is presented in May 2020.

The future:

So when will the singularity arrive? We don't know. The most recent survey of AI experts suggests it will happen within this century - more precisely, within the next 40 years.

What is certain is that the development will be exponential. And when the singularity comes, it will come in an instant.

Thank you for reading and for sticking with it. Which moments in the history of AI development are pivotal for you? Have I missed anything fundamental? Are you afraid of the singularity?
