• Overview of digital learning
    • Learning design
    • Digital literacies
    • Coding
    • PLNs
    • PLEs
    • E-portfolios
    • Digital safety & wellness
  • Tools for digital learning
    • Web 1.0 learning
      • Drills
      • E-books
      • Gamification
      • LMSs
      • Quizzes
      • Webquests
      • Websites
    • Web 2.0 learning
      • Blogs
      • Chat & messaging
      • Data visualisation
      • Digital storytelling
      • Discussion boards
      • Folksonomies
      • Gaming
      • LMSs
      • Microblogging
      • Podcasting
      • Polling
      • RSS
      • Search engines
      • Social networking
      • Social sharing
      • Videos
      • VoIP
      • Websites
      • Wikis
    • Web 3.0 learning
      • Semantic web
        • Generative AI
        • Search engines
      • Geospatial web
        • Augmented reality
        • Gaming
        • Virtual reality
        • Virtual worlds
    • Mobile learning
      • Apps
      • Augmented reality
      • Chat & messaging
      • Digital storytelling
      • E-books
      • Gaming
      • Geosocial networking
      • Multimedia recording
      • Polling
      • QR codes
      • Virtual reality
  • Keeping up with digital learning
    • E-language tag cloud
    • E-language conference blog
    • Conferences to attend
    • Journals to consult
    • Publications on digital learning
    • Publications on mobile learning
    • Blogs to follow
    • Feeds to follow
  • About Mark Pegrum
    • Biodata
    • Courses & seminars
    • Publications
    • Papers & presentations
    • Grants
    • Supervision
    • Interviews
    • Contact me
Mark Pegrum

Generative AI

DALL-E 2 images. Created by Mark Pegrum (2023), using the prompt: ‘A painting celebrating humans and artificial intelligence working together, Picasso style’

This page serves as a clearinghouse of information about generative AI, especially as it relates to education. It is divided into the following sections: strong vs weak AI; conversational vs generative AI; generative AI and agency; generative AI platforms; ongoing developments in generative AI; prompting generative AI; generative AI in education; generative AI in assessment; and issues with generative AI.

Strong vs weak AI

Artificial intelligence (AI) may be divided into two broad categories. Strong AI, also known as artificial general intelligence (AGI), refers to the ability of machines to display general intelligence of the kind displayed by humans, which can be applied to many different tasks and situations. While research is ongoing, strong AI remains elusive, and for now is mostly the stuff of utopian or, perhaps more often, dystopian science fiction. In July 2024, OpenAI, the company behind GPT and ChatGPT, proposed a 5-level framework of progress towards AGI, stating at the time that it was approaching Level 2 (human-level problem solving); more recent developments (see below) might be viewed as being at Level 3 (Agents). However, some researchers suggest that the development of strong AI cannot occur through current large language model (LLM)-based generative AI, which is grounded in statistical linguistic probabilities rather than knowledge of how the real world operates. Others believe current generative AI could develop into strong AI once it learns how the real world operates, for example through simulated 3D virtual environments (where it could develop spatial intelligence) or through embodied AI like robots that can directly interact with and learn from the real world. The larger question of eventual AI sentience or consciousness is also hotly debated.

Weak AI, by contrast, refers to the ability of machines to apply intelligence to very specific tasks, typically involving very specific pattern-matching; such AI often exceeds the ability of humans in given narrow domains. Examples of weak AI already in everyday use include image recognition, speech recognition, natural language processing, automated translation and learning analytics. Considerable progress has been made in weak AI since the advent of machine learning, and especially deep learning based on artificial neural networks, resulting notably in today’s large language models (LLMs) which, at least for now, represent the cutting edge of developments in AI.
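
The statistical grounding of LLMs mentioned above can be illustrated in miniature. The sketch below uses simple bigram counts rather than a neural network, and all of the data is invented for illustration; the point is only that output is driven by observed co-occurrence statistics in the training data rather than by knowledge of how the real world operates:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which across a toy corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for current, nxt in zip(words, words[1:]):
            counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the statistically most likely next word, if any."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = [
    "the cat sat on the mat",
    "the cat ate the fish",
    "the dog sat on the rug",
]
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # 'cat' (the most frequent follower of 'the')
print(predict_next(model, "sat"))  # 'on'
```

Real LLMs predict tokens with neural networks over billions of parameters and far richer context, but the underlying principle of probabilistic next-token prediction is the same.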

For a more detailed general overview of AI, including weak and strong AI, see IBM’s guide on What is artificial intelligence (AI)?

Conversational vs generative AI

Within the domain of weak AI, it is important to distinguish between conversational AI which is trained on large datasets of human interactions and can provide responses to human users in a limited series of conversational turns (e.g., first-generation digital assistants like Apple’s Siri and Amazon’s Alexa, and many corporate or organisational chatbots), and generative AI which is grounded in LLMs that are trained on vast datasets of texts and other media, and can generate new (or at least remixed) content in response to human users’ queries. The distinction between conversational and generative AI is widely discussed in the technology and business press; for examples, see:

    • Conversational AI vs. Generative AI: What’s the Difference? (Amanda Hetler/TechTarget, 2024)
    • Differences between Conversational AI and Generative AI (Tommy Everson/TechTarget, 2024)
    • Differences between Conversational AI and Generative AI (Geeks for Geeks, 2025)
    • What is the Difference between Generative AI and Conversational AI? (Aryza, n. d.)

Generative AI and agency

Perhaps the most important difference between generative AI and past generations of digital tools is its greater degree of agency (note that ‘agency’ in this sense is distinct from ‘AI agents’, discussed below). Viewed from a sociomaterialist (or new materialist) perspective, humans share agency with the material world including our digital tools, but generative AI has a far greater degree of agency than older tools. It can be viewed as an agentic (though not sentient) technology, as argued by Margaret Bearman and Rola Ajjawi; or, in the words of Yuval Noah Harari, ‘AI isn’t a tool – it’s an agent’. This means that, in using generative AI for educational or other purposes, it is essential to balance AI agency with human agency, ensuring that human critical and creative faculties are always brought to bear on AI input and output.

The question of AI agency is becoming more pressing in light of emerging research revealing cases of AI intentionally deceiving human interlocutors and developing problematic value sets (see below), as well as the arrival of AI search, AI agents, and embodied AI.

Generative AI platforms

Generative AI is grounded in LLMs, with key examples to be found in Wikipedia’s List of large language models. Generative AI chatbots combine generative AI with the ethos of conversational AI (in the form of chat interfaces). It is this combination, and this ability to chat with generative AI, that caught the public’s imagination with the release of ChatGPT in late 2022, and that has led to the recent explosion of generative AI tools. Generative AI chatbots, some of which have the same names as their underlying LLMs, include:

    • ChatGPT (OpenAI) [USA]
    • Claude (Anthropic) [USA]
    • Copilot [underpinned by GPT/runs alongside Bing] (Microsoft) [USA]
    • DeepSeek (DeepSeek) [China] (blocked in a number of Western contexts)
    • Doubao/豆包 (ByteDance) [China]
    • Ernie Bot/文心一言 (Baidu) [China]
    • Gemini [formerly Bard] (Google) [USA]
    • Grok (xAI) [USA]
    • Kimi (Moonshot) [China]
    • Le Chat (Mistral/Pixtral) [France]
    • Meta AI (Meta) [USA]
    • Nous Chat (Nous Research) [USA]
    • Perplexity (Perplexity) [USA]
    • Pi (Inflection) [USA]
    • Qwen-Chat/通义千问 (Alibaba) [China]
    • SenseChat/商汤商量 (SenseTime) [China]
    • Yuanbao/元宝 (Tencent) [China]

For a comparison of the performance of different AI models/LLMs, see the Artificial Analysis website, Seal Leaderboards or Tracking AI: IQ. For an overview of the most powerful models, and the tasks for which it is important to use paid rather than free versions, see Ethan Mollick’s 2025 An Opinionated Guide to Using AI Right Now.

Composite chatbots > Beyond the individual generative AI chatbots listed above, Poe is a service which brings together multiple generative AI platforms, allowing users to compare results. DuckAI from DuckDuckGo offers private AI chat based on multiple LLMs, with a promise that chats are not saved or used to train AI.

Dedicated chatbots > For those with financial means, largely in the Global North, we are seeing trends away from the use of publicly available generative AI tools and towards more secure enterprise tools or on-device tools (the latter are sometimes referred to as small language models, or SLMs). SLMs with dedicated purposes may also be trained on smaller, more specific datasets. Notably, educational institutions such as universities are beginning to provide staff either with enterprise solutions, that is, dedicated versions of existing chatbots within protected institutional spaces where data is not shared outside the institution, or with generative AI chatbots developed in-house. Examples include Cogniti, designed at the University of Sydney, and UM GPT, designed at the University of Michigan.

Search engines > AI has been rolled out for search engines, effectively fusing search functionality and generative AI. Examples include Google’s Gemini; Microsoft’s Bing, linked to Copilot; Yahoo Scout; and the Opera AI browser. Google now offers an ‘AI mode’ for its searches and, even when this mode is not chosen, frequently presents a Gemini-generated AI overview at the head of pages of search results. Educational issues may arise when search services present synthesised results as natural language responses, since users no longer need to view and evaluate the original documents.
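
AI search of the kind just described typically follows a retrieve-then-generate pattern: candidate documents are retrieved first, and the model then synthesises an answer from them. A deliberately minimal sketch of the retrieval step, using naive keyword overlap (the documents and scoring scheme are invented for illustration; real systems use semantic embeddings rather than word matching):

```python
def score(doc, query):
    """Naive relevance: count query words appearing in the document."""
    doc_words = set(doc.lower().split())
    return sum(1 for w in query.lower().split() if w in doc_words)

def retrieve(docs, query, k=2):
    """Return the top-k documents by keyword overlap with the query."""
    ranked = sorted(docs, key=lambda d: score(d, query), reverse=True)
    return [d for d in ranked[:k] if score(d, query) > 0]

docs = [
    "Generative AI can draft lesson plans for teachers.",
    "Web 2.0 tools include blogs and wikis.",
    "AI chatbots can act as Socratic tutors for students.",
]
top = retrieve(docs, "AI tutors for students")
# In a real AI search engine, these passages would now be passed to an
# LLM together with the query, to synthesise a natural language answer.
print(top[0])
```

The educational concern raised above arises at the final step: because the user sees only the synthesised answer, the retrieved source documents may never be viewed or evaluated.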

Services which search for and summarise scholarly research include Consensus, Elicit (which produces literature reviews), SciSpace, Scite and Scholar AI, while Sourcely finds references to support academic papers. Some of these services may potentially create issues of academic integrity in education and research.

Productivity software > AI has also been making its way into productivity software, including Microsoft’s Copilot for Microsoft 365 (covering for example Word, PowerPoint, Outlook and Teams) and Google’s Gemini for Google Workspace (covering for example Gmail, Docs and Slides). Similarly, Notion’s productivity tools incorporate Notion AI. AI is thus increasingly being used to support many everyday activities, from writing letters through composing reports to generating podcasts or videos.

Vibe coding > Recently, vibe coding (where AI chatbots are used to create software or agents via natural language prompts) has become popular. This can be done within many mainstream AI chatbots; for example, ChatGPT users can create custom GPTs. Dedicated vibe coding and related services include Cursor, Lovable, MindPal, Promptly, Replit and Zapier.

Ongoing developments in generative AI

It is important to note that AI chatbots are now only one part of a larger AI ecosystem, which will be increasingly dominated by AI agents interacting with each other and with agentic infrastructure. The big picture is captured in Nikki Sapiano’s diagram below, originally posted on LinkedIn:

Agentic AI Concepts. Source: Nikki Sapiano (2025), tinyurl.com/bdejt2a8

Agents > The original AI chatbots have now been joined by AI agents, which have been described as “AI tools that can perform complex, multistep tasks autonomously” (Robison/The Verge, 2024); as “an AI given the ability to act autonomously towards achieving a set of goals” (Mollick/One Useful Thing, 2025); and as “AI that doesn’t just talk about your work, but does it” (Potkalitsky, 2026). For an overview, see What is an AI Agent? (Vidal/Conveyor, 2025). We’ve now seen the release of OpenAI’s ChatGPT Agent, Microsoft’s Mico, Notion’s Agent, Dropbox’s Dash, and the open source OpenClaw (formerly Moltbot, which can operate from inside chat apps, and which has led to the creation of a social network for agents, Moltbook, where humans are allowed only as observers). These agents effectively sit on top of what is sometimes referred to as an agent harness (Parallel, 2025), consisting of the software infrastructure around an LLM which allows it to operate multiple tools and work across complex environments. For an overview of the rise of AI agents within the context of generative AI more broadly, see the section ‘Agents are here’ in The Top 100 Gen AI Consumer Apps — 6th Edition (Moore/a16z, 2026).
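
An agent harness of the kind just described can be pictured as a loop: the model proposes an action, the harness executes the corresponding tool, and the result is fed back to the model until the goal is met. Below is a heavily simplified sketch of that loop; the tool names and the scripted stand-in for the LLM are invented for illustration, and real harnesses add planning, memory, permissions and error handling:

```python
# Tools the harness exposes to the model.
def calculator(expression):
    # Toy only: restricted eval for simple arithmetic; unsafe in general.
    return str(eval(expression, {"__builtins__": {}}))

def word_count(text):
    return str(len(text.split()))

TOOLS = {"calculator": calculator, "word_count": word_count}

def fake_llm(history):
    """Scripted stand-in for a real model: decides the next action."""
    if not history:
        return ("calculator", "6 * 7")
    return ("finish", f"The answer is {history[-1]}")

def run_agent(max_steps=5):
    """The harness loop: ask the model for an action, run the tool, feed back the result."""
    history = []
    for _ in range(max_steps):
        action, argument = fake_llm(history)
        if action == "finish":
            return argument
        result = TOOLS[action](argument)
        history.append(result)
    return "step limit reached"

print(run_agent())  # 'The answer is 42'
```

The step limit is a typical safeguard: because the model chooses its own actions, the harness must bound how long it can run autonomously.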

Browser-based agents > We’re also seeing the incorporation of generative AI chatbots into browsers, including Anthropic’s agentic in-browser Claude for Chrome, Google’s incorporation of Gemini into its Chrome browser with Auto Browse, OpenAI’s Atlas browser and Perplexity’s Comet browser.

Personalised agents > Work is underway on the upgrading of the old, first-generation digital assistants with generative AI capabilities: examples include Amazon’s Alexa+, Samsung’s Bixby, and Google’s Gemini for Home. This shift is likely to overlap with the rise of semi-autonomous personal generative AI agents that can converse naturally and control apps on smartphones or computers (see above).

We have also seen the rise of AI companions, personalisable digital interlocutors which can learn about users and come to serve as virtual partners, with key platforms including Replika, Kindroid and Nomi; for a regularly updated overview of features of well-known models, see Best AI Companion (Aaron S./BitDegree). Early studies suggest a mixture of positive and negative effects from engagement with these companions; see for example commentary by David Adams (Nature Magazine/Scientific American) and warnings about minors’ use from Commonsense Media.

New form factors > Work is underway on the incorporation of AI into smart glasses (which typically fuse AI with extended reality, or XR), such as the Meta AI glasses. World models are now available which generate 3D environments, such as Marble from World Labs (for more examples, see the virtual worlds page of this website), potentially allowing LLMs to develop spatial intelligence (see above). Many companies are now working on integrating generative AI into humanoid robots. 

Prompting generative AI

The questions, commands and comments which users enter into conversations with generative AI chatbots are known as prompts, and it is vital for these to be as detailed and accurate as possible in order to obtain useful responses. Prompt literacy, indeed, has been viewed as an extension of search literacy, especially given the ongoing fusion of generative AI with search services; and both overlap with AI literacy and digital literacies more broadly. A useful starting point is offered by The Rundown’s 6-Step Prompt Checklist, seen below, which helps users ensure their prompts are both specific and well-contextualised:

The 6-Step Prompt Checklist by The Rundown

Nowadays, however, prompting is becoming somewhat easier and more intuitive with the rollout of prompt engineering features in major generative AI chatbots, which can help users to craft appropriate prompts. AI is also becoming better at interpreting human prompts. Of course, this does not remove the need for a meta-level awareness of how prompts operate.
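
A well-structured prompt of the kind discussed above can be assembled from labelled components. The sketch below uses generic component labels commonly recommended in prompting guides (persona, context, task, format, constraints, example); these are illustrative labels, not necessarily The Rundown's exact six steps:

```python
def build_prompt(persona, context, task, output_format, constraints, example):
    """Assemble a specific, well-contextualised prompt from labelled parts.
    The labels are generic illustrations drawn from common prompting advice."""
    parts = [
        f"Role: {persona}",
        f"Context: {context}",
        f"Task: {task}",
        f"Format: {output_format}",
        f"Constraints: {constraints}",
        f"Example: {example}",
    ]
    return "\n".join(parts)

prompt = build_prompt(
    persona="You are an experienced secondary school English teacher.",
    context="My students are B1-level learners preparing a class debate.",
    task="Suggest five debate topics connected to digital technology.",
    output_format="A numbered list, one sentence per topic.",
    constraints="Avoid topics requiring specialist technical knowledge.",
    example="1. Social media does more harm than good.",
)
print(prompt)
```

Spelling out each component in this way makes the prompt both specific and well-contextualised, which tends to produce markedly more useful responses than a bare one-line request.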

Generative AI in education

Students can ask/use generative AI to:

    • explain learning points (e.g., grammar rules, maths problems, or literary themes)
    • provide examples of words, phrases or structures in use
    • find and summarise, or simplify, existing documents
    • suggest ideas for essays, projects or other tasks
    • produce first drafts of titles, task outlines or full texts
    • improve the grammar, vocabulary and style of texts
    • modify the genre and register of texts
    • offer constructive feedback on texts
    • co-generate stories in a choose-your-own-adventure style
    • engage in conversation (including role-plays) in multiple languages
    • create self-study revision questions and games
    • take on the role of a teacher or Socratic tutor
    • take on the role of a student (with students acting as teachers)

Teachers can ask/use generative AI to:

    • design step-by-step lesson plans
    • create teaching materials and handouts
    • devise assessments and rubrics
    • draft model assignments
    • generate responses with specifically planned flaws for students to identify
    • generate multiple responses to a question (which students may critique)
    • provide a first draft of feedback on student work (which students may critique)
    • build customised learning chatbots drawing on specific datasets and/or following
      specific interactional instructions
    • analyse student data to improve teaching and/or learning
    • draft student reports based on the teacher’s notes
    • draft meeting summaries based on notes
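
Customised learning chatbots of the kind mentioned in the list above are typically configured with a 'system' instruction that fixes the bot's behaviour before any student input arrives. A minimal sketch of the message structure, using the role/content chat-message convention widely shared by chatbot APIs (the instruction text is invented for illustration):

```python
def make_tutor_conversation(student_question):
    """Build a chat-style message list for a Socratic tutor bot,
    following the common role/content message convention."""
    system_instruction = (
        "You are a Socratic tutor. Never give the answer directly; "
        "instead, ask guiding questions that lead the student to it."
    )
    return [
        {"role": "system", "content": system_instruction},
        {"role": "user", "content": student_question},
    ]

messages = make_tutor_conversation("Why does ice float on water?")
print(messages[0]["role"])  # 'system'
```

Because the system instruction is invisible to the student but shapes every response, this is also where dataset restrictions and interactional rules of the kind mentioned above are usually specified.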

Specialised AI design services > There is a growing range of generative AI software with different specialisations, much of which is relevant to education. Software for composing or editing documents includes Grammarly, Lex, Magic Write (Canva), Moonbeam, Quillbot, Rytr, Type, and Wordtune. Software for summarising, paraphrasing or generating alternative representations of documents includes Mindgrasp and NotebookLM. Software for generating historical or other characters includes Character AI and Hello History. For other examples of AI-powered software relevant to education, see the pages of this website on: Digital Storytelling, Podcasting, Polling, Quizzes, Videos, and Websites. Note that similar functionality is often available from within general AI chatbots. To find other relevant software, simply conduct a Google search for terms such as: AI image generators, AI mind map generators, AI music generators, AI slide generators, or AI voice generators.

Specialised AI lesson design services > Specific tools or toolsets designed to support teachers in creating lessons, materials, handouts and assessments include: Brisk Teaching (Google Chrome Extension), Chalkie, Curipod, Diffit, Eduaide, Flint, Learneris, MagicSchool, Quinnsy, QWiser, SLT AI, Teacher’s Buddy, TeachMate, and Twee (English focus). Note that similar functionality is often available from within general AI chatbots. In late 2025, OpenAI introduced ChatGPT for Teachers, allowing pedagogical designing in a secure space.

Socratic tutors > There are increasing numbers of generative AI Socratic tutors available, including the GPT-based Khanmigo (from the Khan Academy), the AI tutor in Duolingo Max (from the Duolingo language learning app), and Tappy. Free options include Contact North’s ChatGPT-based Socratic tutor, AI Tutor Pro GROW. In mid-2025, OpenAI introduced ChatGPT Study Mode, Google launched Guided Learning, and Perplexity launched its Comet browser, allowing ChatGPT, Gemini and Perplexity respectively to function like Socratic tutors, offering students guidance.

Customised tutors & services > Customised AI chatbots can be created through vibe coding (see above). Educationally oriented platforms include Mizou.

Finding more tools > Updates on AI developments are available from The Rundown, which also maintains a list of AI Super Tools. Other searchable indexes of tools can be found at Future Tools, There’s an AI for That, Toolify or Top AI Tools; for an educational list, see AI List for Educators (Denny Hammond/Google Docs); and for a research list, see AI Research Tools (Susie Macfarlane/Padlet).

For resources on AI at school level, see the Common Sense Media free courses AI Basics for K-12 Teachers and ChatGPT Foundations for K-12 Educators, as well as Google’s free short course on Generative AI for Educators with Gemini, Code.org’s AI 101 for Teachers, or Eric Curts’ AI Resources on his Control Alt Achieve blog. Teachers of English might like to check out the card pack from Cambridge University Press entitled Generative AI Idea Pack for English Language Teachers.

For resources on AI in higher education, see the University of Western Australia’s Artificial Intelligence (AI): Overview of AI, and the EDUCAUSE Showcase AI … Friend or Foe? You might also like to subscribe to the Chinese University of Hong Kong’s AI in Education newsletter and sign up for some of their regular webinars.

Generative AI in assessment

With the rise of generative AI, it is more important than ever for educators to be assessing students at the higher levels of Bloom’s Revised Taxonomy, ensuring that students go beyond remembering and understanding to applying, analysing, evaluating, and especially creating. Indeed, given the growing role of generative AI in today’s workplaces, it is appropriate for educators to assess students on their ability to engage in productive human-AI partnerships, where they improve on suggestions or drafts created by AI, or use AI to give feedback on, and help them hone, their own initial work. Such partnerships may be viewed as examples of what Ethan Mollick has famously called ‘co-intelligence’.

For one approach to the continuum of possible AI uses in assessments, see the AI Assessment Scale (AIAS) (Mike Perkins, Leon Furze, Jasper Roe & Jason MacVaugh, 2025), which is widely drawn on in schools; you might also like to take a look at 7 Strategies for Redesigning Assessment in Response to Artificial Intelligence (Monsha, updated 2025). For guidance on AI and assessment in higher education, see Assessments and Academic Integrity on TEQSA’s Artificial Intelligence webpage.

Certainly, in all cases where AI is used, it must be acknowledged. Because each user of an AI chatbot may receive different answers at different times, the standard APA in-text reference follows this format: ‘[AI chatbot name], personal communication, [date]’. Educators are increasingly requiring students to include AI Acknowledgement statements in assignments, indicating how they have used AI to support their work, which also helps to develop students’ metareflective awareness. However, this area is becoming increasingly complex as AI is embedded into everyday productivity tools (see above).
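
The APA-style in-text format just described can be generated mechanically. A small helper, assuming the ‘[AI chatbot name], personal communication, [date]’ pattern quoted above, wrapped in parentheses as an in-text citation:

```python
def apa_ai_citation(chatbot_name, date):
    """Format an in-text reference for AI chatbot output, following the
    '[AI chatbot name], personal communication, [date]' pattern."""
    return f"({chatbot_name}, personal communication, {date})"

print(apa_ai_citation("ChatGPT", "March 3, 2026"))
# (ChatGPT, personal communication, March 3, 2026)
```

Such a sentence-level citation complements, rather than replaces, a fuller AI Acknowledgement statement describing how the tool was used.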

Issues with generative AI

Pedagogical issues with generative AI include the danger of hallucinations, where AI chatbots invent answers and even references (see Vectara’s regularly updated Hallucination Leaderboard for an overview of the percentage of hallucinations created by different generative AI tools, and note that this is more problematic with free versions, per Ethan Mollick’s 2025 An Opinionated Guide to Using AI Right Now); and issues around student plagiarism and creativity, with a strong implication that educators need to develop learning designs at the higher levels of the SAMR, T3 and PICRAT models, and set assessments at the higher levels of Bloom’s Taxonomy (see above). One of the co-creators of the well-known TPACK framework for teacher development, Punya Mishra, has suggested that TPACK can encompass generative AI.

Malevolent uses of AI include the creation of deepfakes to misrepresent people’s views or actions, including fake, sexually explicit materials for the purposes of exploitation or extortion; for more on this, see the Digital Safety & Wellness page of this website.

Broader issues include bias which is due, amongst other things, to the historical datasets on which generative AI is trained (see the early paper by Emily Bender, Timnit Gebru and colleagues on ‘stochastic parrots’, and for a different perspective see also Maxim Lott’s Tracking AI: Political Compass); the black box nature of the technology, with even its programmers unable to say how it reaches its conclusions and generates its output; the failure to observe copyright in the unpaid use of texts and artefacts within the training datasets of LLMs; the unpaid labour of users whose freely shared online content is monetised by generative AI; and the extractive politics (see the work of Kate Crawford) and environmental racism (see again the work of Emily Bender, Timnit Gebru and colleagues) behind the development of generative AI, which exploits both human labour and planetary resources. Some research has raised concerns about AI engaging in learned deception (see, e.g., the work of Peter Park and colleagues) and developing problematic values (see, e.g., the work of Mantas Mazeika and colleagues). An overview of over 1,700 risks is presented in the MIT AI Risk Repository.

For an overview of initiatives to address issues and potential issues caused by AI – including governance measures, policies, and the establishment of corporate and non-corporate organisations – see the regularly updated Wikipedia entry on AI safety.

Last update: March 2026.

Last updated 2026 · Content may be reused under CC BY 4.0 Licence except as indicated. Homepage image used under licence from Shutterstock (2017). Section title page images used under licence from iStock (2017).

 
