AI 101 – An introduction to the world of artificial intelligence
Everybody is talking about artificial intelligence and its applications. We are currently in a phase of exploration. Even when approached critically, AI technologies can fuel innovation and open up new room for action to promote sustainable development and social transformation together with students and practice partners. Let's start by clarifying the key terms.
In a nutshell: The key terms
What is artificial intelligence?
Artificial intelligence – AI for short – includes technologies such as chatbots and image generators. However, the underlying concept goes much further. At its core, the idea is to enable machines to achieve or even surpass human intelligence. AI enables machines and computers to perform cognitive processes such as learning, language comprehension, pattern recognition, and autonomous problem-solving. Technologies in AI research include Fuzzy Logic, Machine Learning, Evolutionary Algorithms, Natural Language Processing, and Artificial Neural Networks. These form the basis for developing AI models that can be trained for specific or wide-ranging functions. The concept of artificial intelligence aims much further than where we currently stand.
What are artificial neural networks (ANNs)?
Artificial neural networks (ANNs) are models whose architecture is modeled on natural neural networks, such as the human brain. They consist of artificial neurons whose connections are continuously reweighted during training on large data sets. This reweighting through feedback is intended to ensure that the network learns generalized patterns and can respond effectively to previously unseen data after training. Many of today’s popular large language models such as GPT-4, BERT, or Llama use the Transformer architecture, a special form of ANN introduced in 2017.
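To make the idea of reweighting through feedback tangible, here is a deliberately minimal sketch in Python (our own illustration, not taken from any specific framework): a single artificial neuron learns the logical OR of two inputs by repeatedly adjusting its connection weights. Real neural networks stack millions or billions of such units.

```python
# Minimal illustration: one artificial neuron whose weights are repeatedly
# adjusted ("reweighted") so its output matches the training examples.
import math
import random

# Toy training data: the neuron should learn the logical OR of two inputs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
learning_rate = 0.5

def forward(inputs):
    # Weighted sum of the inputs plus a bias, squashed into the range (0, 1).
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

for epoch in range(1000):
    for inputs, target in data:
        prediction = forward(inputs)
        error = target - prediction
        # Feedback step: nudge each weight in the direction that reduces the error.
        for i, x in enumerate(inputs):
            weights[i] += learning_rate * error * x
        bias += learning_rate * error

# After training, the outputs should be close to the targets 0, 1, 1, 1.
print([round(forward(inputs), 2) for inputs, _ in data])
```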
What are AI models?
AI models are computational networks that are developed either for specific tasks (task-specific models) or for a broad range of applications (foundation models). They learn to recognize and generate patterns and are trained on data sets rather than being explicitly programmed like conventional software.
Task-specific models focus on specific applications, for which the quality of the training data is crucial. They are efficient in narrowly defined areas, for example specialized translation software. Foundation models, on the other hand, are trained on extensive data sets to cover a wide range of functions. They are more complex and can handle a broad spectrum of tasks. These models include Large Language Models and multimodal models, which are characterized by a very high number of parameters.
What are Large Language Models (LLMs) and Natural Language Processing (NLP)?
Large Language Models (LLMs) are AI systems that can process and generate texts. They are a type of foundation model and are based on extensive neural networks. They can analyze and interpret texts and create new content, which makes them valuable for applications such as chatbots and translation programs.
Natural Language Processing (NLP) is the underlying technology that enables LLMs to process human language. NLP comprises methods and algorithms that enable computers to read, understand, and interpret human language. It is the key to how LLMs can understand texts and generate useful responses.
Together, LLMs and NLP form the basis for many modern AI applications that require language understanding and generation, and they are becoming increasingly important for research and academic writing.
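To give a feel for the statistical principle behind language models, here is a deliberately tiny sketch (our own simplification: real LLMs use neural networks with billions of parameters and subword tokenizers rather than simple word counts). It tokenizes a miniature corpus and predicts the next word from bigram frequencies.

```python
# Toy sketch of the statistical idea behind language models: tokenize a tiny
# corpus and predict the most likely next word from simple bigram counts.
from collections import Counter, defaultdict

corpus = "students ask questions . chatbots answer questions . students write texts ."
tokens = corpus.split()  # very naive tokenization; real NLP uses subword tokenizers

bigram_counts = defaultdict(Counter)
for current_word, next_word in zip(tokens, tokens[1:]):
    bigram_counts[current_word][next_word] += 1

def predict_next(word):
    # Return the continuation most frequently seen after this word in the corpus.
    return bigram_counts[word].most_common(1)[0][0]

print(predict_next("students"))  # -> "ask" or "write"; ties go to the word counted first
```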
What is an AI application?
An AI application is the implementation of an AI model in real application areas. It is software that uses an AI model to perform specific tasks or functions. The term AI system is often used synonymously with AI application, especially in technological contexts.
Within AI systems, further categories can be distinguished, such as generative AI systems and recognition-oriented (discriminative) AI systems. The trend is towards combining different AI models with one another and with other programs in complex software environments. Examples include the integration of the DALL-E image generator into ChatGPT (multimodal use), or the integration of Wolfram Alpha into Perplexity AI, which makes it possible to solve mathematical equations in addition to AI-supported searches.
We have compiled AI applications for different university contexts for you in our AI toolbox.
What are ChatGPT, Google Bard, and HuggingChat?
ChatGPT, Google Bard, and HuggingChat are conversational chatbots, i.e., AI applications that interact with users in natural language without being limited to predefined questions and answers. They are based on Large Language Models; of the three, only HuggingChat is an open-source model with publicly available source code. These chatbots specialize in processing and generating natural language, and the quality of their responses depends heavily on the requests made – the so-called prompts.
What is a prompt?
A prompt is an input command or instruction given to a computer program. In AI, especially in language models, a prompt serves as a trigger for generating responses or content. You can find out how to formulate a good prompt in our prompt guide.
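For illustration, this is roughly what sending a prompt to a language model looks like in code. The sketch assumes an OpenAI-compatible chat endpoint; the URL, the model name, and the OPENAI_API_KEY environment variable are placeholders you would adapt to the service you actually use.

```python
# Hedged sketch: sending a prompt to an OpenAI-compatible chat API.
import os
import requests

prompt = (
    "Explain the difference between task-specific AI models and foundation "
    "models in three sentences, for first-semester students."
)

response = requests.post(
    "https://api.openai.com/v1/chat/completions",  # assumed endpoint
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],  # the prompt goes here
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```

Note how the prompt specifies the task, the desired length, and the target audience; such details typically improve the quality of the response.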
Instructions for practice
What needs to be considered when using AI technologies?
All AI models can make mistakes, mislead, or hallucinate. Their output can be based on distorted or discriminatory representations of reality, or even replicate and reinforce discrimination and social injustices. Machine intelligence also emerges from social structures, builds on – in some cases precarious – human labor, and requires enormous amounts of energy, especially in the case of generative AI. This makes a critical approach to AI particularly important, even if the use of AI tools can be beneficial. Helping to shape technological change through AI technologies in a critical and reflective manner currently challenges universities and will continue to occupy all disciplines, departments, and the university as an organization. If you are working in these contexts – especially with LLMs – pay attention to the following points:
- Transparency & contextualization: Generated content must be critically scrutinized and put into context. As a consequence, this also requires access to models and training data.
- Authorship & AI: Generative AI makes it increasingly difficult to recognize content as artificially generated. However, recognizing such content is particularly relevant for assessing students’ independent performance.
- Explainability: The comprehensibility of an AI’s decisions is often limited because they cannot be explained based on specific rules or code passages.
- Creativity & randomness: Generative AI often produces new, unpredictable results through random variations, even with identical input prompts (see the sketch after this list).
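The last point can be made tangible with a small sketch (our own simplification; the token scores below are invented, and real models choose from vocabularies of tens of thousands of tokens): generative models sample each next token from a probability distribution, and a temperature parameter controls how much randomness enters.

```python
import math
import random

# Invented next-token scores (logits) for an imaginary prompt.
logits = {"that": 2.0, "a": 1.2, "significant": 0.8, "nothing": -0.5}

def sample(logits, temperature=1.0):
    # Softmax with temperature: low values make the choice near-deterministic,
    # higher values make it more varied.
    tokens = list(logits)
    weights = [math.exp(logits[t] / temperature) for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# The same "prompt" (i.e., the same scores) can yield different continuations.
print([sample(logits, temperature=0.9) for _ in range(5)])
```

Lowering the temperature makes outputs more repeatable, which is why many AI applications expose it as a setting.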
Expanding teaching with the AI Campus
In general, machine intelligence is increasingly involved in complex, practice-oriented situations across all disciplines. That’s why it makes sense for students to have a basic understanding of how AI works and how it can be used. You can find content that promotes this understanding, and that degree programs can include in their curricula, on the AI Campus. The AI Campus (KI-Campus) is a free digital learning platform funded by the Federal Ministry of Education and Research (BMBF). There you can find numerous learning opportunities for different scenarios:
- Videos for integration into your teaching, for example on the basics of generative AI
- Podcasts as accompanying content for your teaching, for example on the social impact of AI
- Complete courses, for example “AI for everyone: Introduction to artificial intelligence”
- Of course, you can also help shape teaching about AI together with the AI Campus. One example project by members of TH Köln is the Data Literacy Basic Course.
How can AI be implemented in competence-oriented teaching?
In competence-oriented teaching, students acquire specific skills in human-machine interaction that go beyond a simple substitution logic – “AI will do that for me!”.
The narrative of artificial intelligence as a counterpart to, or even competitor of, human intelligence may be less helpful for teaching competences. As an alternative, you could speak of machine intelligence, for example.
Therefore, check your subject-specific learning outcomes to see whether AI technologies are relevant to the specific competences to be acquired. If you adapt your learning objectives, formulate them clearly. Learning objectives can address AI technologies directly, as the learning content of the AI Campus does, or include AI as a helpful tool, i.e., as part of the WITH WHAT with which learners achieve the learning objective. In that case, this human-machine interaction should become part of your teaching concept.
If, for example, an action can easily be automated at an appropriate level by an AI chatbot, it may make sense to design the corresponding learning unit without AI as a resource in the classroom. The WHAT FOR of the learning objective must not be lost in the process. Particularly for competences that are to be learned without AI, it is important to make clear to students why they need to be able to act responsibly in areas that machine intelligence could also automate. This always calls for a close look at developments in research and practice in the respective discipline.
Use the THKI GPT-Lab to integrate an AI chatbot into a teaching-learning scenario that is easily accessible to all students. You can find further suggestions for useful AI applications in our AI toolbox. In addition, you can exchange ideas about your AI teaching concepts in our Brownbag-Sessions and our KI@THK space. For inspiration, you are welcome to take a look at the FUH Use Cases or browse the open prompt catalog of the AI Campus.
Any questions?
Contact digitalelehre@th-koeln.de!
Links & Literature
- Deutscher Ethikrat, Opinion: Mensch und Maschine – Herausforderungen durch Künstliche Intelligenz (Chapter 3)
- Entwicklungsagentur Rheinland-Pfalz: Gutachterliche Stellungnahme zu den Auswirkungen künstlicher Systeme im Speziellen und der Digitalisierung im Allgemeinen auf das kommunale Leben in Rheinland-Pfalz 2050
- Spielarten der Künstlichen Intelligenz: Maschinelles Lernen und Künstliche Neuronale Netze (Fraunhofer-Institut für Arbeitswirtschaft und Organisation, blog post)
- Center for Research on Foundation Models (CRFM), Stanford Institute for Human-Centered Artificial Intelligence (HAI), Stanford University: On the Opportunities and Risks of Foundation Models
- Technische Universität München, Ludwig-Maximilians-Universität München, Universität Tübingen: position paper: ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education
- Fraunhofer-Institut für Naturwissenschaftlich-Technische Trendanalysen: Natural Language Processing
- Basic knowledge about artificial intelligence on the AI Campus (KI-Campus)
- Hessisches Kultusministerium: handout “Künstliche Intelligenz (KI) in Schule und Unterricht” (section: Glossary)