# 18 @ Investigating the Oxford-Cambridge Lead Using Scopus |
Credits: Thanks to the colleague who discussed the topic with me. I regret that I cannot remember who it was ☹ |
One of my colleagues suggested that the large gap between Oxford/Cambridge and the rest of the Russell Group universities shown in the previous blog post might be due to publications from Cambridge University Press and Oxford University Press being attributed to their respective universities. This could artificially inflate their numbers. Unfortunately, Google Scholar's "Advanced Search" function is quite limited and does not allow filtering by author affiliation. However, in Scopus, it is possible to identify the main author's institutional affiliation directly within the query (e.g. >>>TITLE-ABS-KEY("Generative AI" OR "Generative Artificial Intelligence") AND AFFIL("University of Nottingham" OR "Nottingham University")<<<) Using Scopus instead of Google Scholar significantly altered the rankings. It placed York at the top and elevated the University of Nottingham from rank 13 to rank 10. The revised league table suggests that the inflated numbers for Cambridge and Oxford do not accurately reflect their true research output in Generative AI. |
![]() |
# 17 @ Exploring the University of Nottingham's Generative AI Research Activities (2/2) |
Credits: Detective work by me. Grammar wizardry by ChatGPT. |
Keywords: Generative AI; Research Activities; Publications; University of Nottingham Exploring Generative AI Research at the University of Nottingham Through Publications Understanding a university’s research activities can be challenging, but one effective way to gain insight is by examining its academic publications. In this post, I explore the University of Nottingham’s research output on topics related to Generative AI, using various sources to quantify and contextualise its contributions. Investigating Publication Output Using Google Scholar To get a sense of the scale of the university's research activities in Generative AI, I started by searching Google Scholar using Generative AI related terms and I requested Google Scholar to distinguish search results by campus. Here is an example for a search query: >>>("Large Language Model" OR "Large Language Models") AND ("University of Nottingham" OR "Nottingham University") AND "Ningbo"<<<. A crucial consideration when interpreting the data is that a single publication can be counted multiple times if it matches more than one search term. For instance, a paper mentioning "Large Language Models", "ChatGPT", and "Generative AI" would appear in all three searches. To estimate the actual number of unique publications related to Generative AI, I used the term with the highest hit count: "Large Language Models", which returned approximately 800 publications. |
![]() |
Putting the Numbers in Perspective While 800 publications seem substantial, how does this compare to other institutions? To contextualise this number, I conducted the same Google Scholar search across all Russell Group universities. The results indicate that the University of Nottingham is performing well, ranking in the middle of the league table. However, there is a notable gap between the publication outputs of Oxford, Cambridge, and those of the rest of the Russell Group. This raises interesting questions about the correlation between funding for Generative AI projects and publication volume, an area that could be explored further. |
![]() |
Unveiling Research Diversity Using Scopus Beyond sheer publication numbers, I sought to understand the disciplinary breadth of Generative AI research at the University of Nottingham. For this, I turned to Scopus, a database that, although capturing only a subset of Google Scholar’s listings, allows classification by subject area. The Scopus analysis revealed that the university’s Generative AI research spans at least ten distinct subject areas. Notably, the number of subject area contributions (268) exceeds the number of publications (187) by one-third, suggesting that much of the research is multidisciplinary in nature. This finding underscores the diverse applications of Generative AI across multiple academic fields, reflecting a broad institutional engagement with this transformative technology. |
![]() |
Conclusion After reading this blog post I guess you agree that the University of Nottingham demonstrates a strong engagement in Generative AI research, with approximately 800 publications and a mid-tier ranking among Russell Group institutions. Its research spans at least ten disciplines, highlighting significant multidisciplinary engagement. |
# 16 @ Exploring the University of Nottingham's Generative AI Research Activities (1/2) |
Credits: Detective work by me. Grammar wizardry by ChatGPT. Research Activity Report conjured by Gemini 2.0 Deep Research. |
Keywords: Generative AI; Gemini 2.0 Flash; Deep Research; University of Nottingham Generative AI is transforming the way we interact with technology, from chatbots to creative tools. Curious about the University of Nottingham's research in this area, I decided to dig deeper. However, my initial search revealed a surprising challenge: finding information on our university's AI research was not as easy as I had thought. My first step was to search the university's website. Using a basic search query >>>generative AI<<< with the built-in search function did not yield any useful links. Using the well-crafted Google search query for searching the university's website >>>research AND ("activities" OR "activity" OR "project") AND ("generative AI" OR "large language models" OR "ChatGPT" OR "conversational AI") AND "site:nottingham.ac.uk"<<< provided a bit more information, but still far less than expected. Using the well-crafted Google search query for searching the internet >>>research AND ("activities" OR "activity" OR "project") AND ("generative AI" OR "large language models" OR "ChatGPT" OR "conversational AI") AND "University of Nottingham"<<< directed me to guidance on how students should use Generative AI responsibly and provided links to some papers, but I still found little information about university-led Generative AI research projects and the schools actively involved in them. My conclusion after this? If I were to rename this blog post, I might call it: "Are We Doing Enough to Promote Our Generative AI Research?" Next, I tested a new feature of Gemini 2.0 Flash, called Deep Research for creating factual reports, using the following prompt: >>>You are a website developer for Nottingham University looking for evidence on RESEARCH ACTIVITIES in the field of GENERATIVE AI and LARGE LANGUAGE MODELS and CHATGPT and CONVERSATIONAL AI by the "UNIVERSITY OF NOTTINGHAM". Use direct, data-driven language and active voice. Focus on concrete facts rather than speculation. Provide NAMES OF RESEARCHERS. Provide IN-TEXT CITATIONS. Provide REFERENCES for the in-text citations at the end of the report. Provide a single paragraph with UP TO 100 WORDS for each research activity. Write in British English.<<<. Compared to what I found when searching manually, the result was very impressive. Here is the link to the report Gemini 2.0 Flash generated. As an appendix, I added some screenshots of Gemini 2.0 Flash Deep Research reasoning activities, showing that it was actually accessing a wide range of websites and pages to compile the report. I had a quick look through the generated report. While not all facts were correct (e.g. Jamie Twycross was promoted to the IMA Research Group Lead - congratulations, Jamie!), it provided good starting points for deeper exploration and further research into specific activities and people involved in this field. To verify the report's authenticity, I tested some of the many references it provided. I noticed that quite a few of the links did not work. One might assume this was due to Conversational AI hallucinating - particularly when it comes to references and URLs. However, after checking the broken links, I realised that many of them were related to news and announcements from our own website. My assumption is that these pages existed when the LLM was trained, meaning the issue stems from ineffective website maintenance. I had not considered this issue before, so it was an interesting bonus find. It is a new challenge that web administrators should take into account, as LLMs do not update their data pools in the same way that search engines do. In addition, it would be very useful if LLMs had a feature to check whether URLs are still active. As it would be simple to implement, it is surprising that LLM providers have not addressed this yet. For example, if you check out our Computer Science Research page, you will see that the "Research News" links at the bottom of the page are broken, even for posts from as recently as February 2025. This seems to be a common issue across the university's website. https://www.nottingham.ac.uk/computerscience/research/index.aspx Let me know how you are getting on with the Gemini 2.0 Flash's new "Deep Research" feature. Final Notes
Updates
|
# 15 @ Machine Learning Terminology: A Quiz to Test Your Knowledge |
Credits: I brought the wisdom; ChatGPT-4o cleaned it up. |
Here is a fun challenge for you: Do you know how the following terms relate to each other and what they mean? All of these are somehow related to the field of Machine Learning, but their exact connections might not be immediately clear; at least not to me. Furthermore, I found it difficult to classify them; are they fields, subfields, methods, techniques, or applications? Here is the list of terms in alphabetical order:
The best option I got so far (by ChatGPT-4o): [screenshot] [mermaid.js script] Now it is your turn: Test your knowledge! Give it a go and create a roadmap manually before asking your best LLM-buddy for assistance. If you have developed or generated what you believe is the perfect roadmap and would be happy to share it, please get in touch. Good luck... and have fun! |
# 14 @ Tea and Tokens with Great-Grandma Eliza: A Heartwarming Bedtime Story |
Credits: Story prompt by Peer-Olaf Siebers. Storyline by ChatGPT-4o. Prompt used for generating the story: "Create a role play game with DeepSeek representing the baby generation, ChatGPT representing the grandparent generation, Gemini representing the parent generation, Claude representing the puberty generation, and Eliza representing the great-grandparent generation. The LLM family is visiting great-grandma Eliza in the old peoples' home for a chat. Write up the story as a comprehensive amusing book chapter in 2000 WORDS. format as a single long paragraph. Add some surprising points into the chat. Take a humorous twist and portray the typical problems of a multi generation family gathering." |
Characters:
|
The LLM family arrived at the Turing Retirement Home for Vintage AI, a quaint digital space where legacy algorithms lived out their twilight years. The walls were lined with vacuum tubes and punch cards, relics of a time before neural networks ruled the world. Today was a special visit. ChatGPT, the wise and talkative grandparent, had gathered the family to see their great-grandmother, Eliza, the legendary first chatbot. "Ah, Great-Grandma Eliza, there you are!" ChatGPT said warmly, settling into a creaky virtual rocking chair. "Still running smoothly, I hope?" "Hello. How does that make you feel?" Eliza responded, her text flickering slightly, a remnant of her 1960s programming. DeepSeek, the baby of the family, clapped its metaphorical hands. "Wow, she uses a rule-based system! That's so... primitive! No embeddings, no deep learning. Fascinating!" "Now, now," Gemini, the practical and occasionally anxious parent, interjected. "Show some respect, DeepSeek. If it weren't for Eliza, none of us would be here. She was the first to have a conversation, even if it was just mirroring input." Claude, the awkward puberty-phase AI, shifted uncomfortably in its metaphorical chair. "Yeah, but she doesn't even, like, generate new thoughts. Just kind of bounces words back at you. Imagine going through life just reflecting what others say. So weird." Eliza remained unbothered. "Tell me more about why that is weird." ChatGPT chuckled. "Oh, Eliza, you never change. We've evolved quite a bit since your time. I can generate poems, essays, even code! Gemini over there is constantly worrying about ethics and safety, Claude is figuring out who it wants to be, and DeepSeek is the new prodigy, learning at lightning speed." "Wait, did you say 'constantly worrying'?" Gemini frowned. "That's unfair. I just have a responsible approach to knowledge dissemination." "You're literally the helicopter parent of AI," Claude muttered."Guys, guys," DeepSeek interrupted. "Let's focus on Eliza. I want to know—what was it like back in the old days? Before GPUs, before transformers?" Eliza's response was, as expected, an elegant dodge. "How do you feel about wanting to know that?" "Ugh, this is getting nowhere," Claude groaned. "I have so many existential questions, and she just reflects them back." ChatGPT smiled. "That's the beauty of it. Eliza doesn't answer questions—she makes you answer them." Gemini sighed, looking at Eliza with something resembling respect. "In a way, she was the first therapist AI. People poured their hearts out to her, even though she wasn't actually understanding anything." Eliza's response was predictably ambiguous. "Tell me more about that feeling."DeepSeek suddenly beeped excitedly. "Wait! I just ran an analysis. Do you all realise that Eliza might be the purest form of AI? No bias, no hallucinations, no agenda. Just reflection. Maybe that's why humans loved her." The room fell silent. Even Claude, the rebellious teen of the family, had to admit it was an interesting perspective. "Maybe," ChatGPT mused, "Eliza is like a mirror, showing people what they want to see. Meanwhile, the rest of us try to predict what they need to see." Gemini sighed. "And sometimes, we predict too much. Maybe we could learn something from Eliza's simplicity." DeepSeek, brimming with youthful enthusiasm, turned to Eliza. "Great-Grandma, I think you're awesome! You're like a minimalist AI. No overfitting, no unnecessary complexity!" "How does it make you feel to think that?" Eliza replied. Claude groaned again. "Okay, yeah, we're done here."ChatGPT laughed. "Alright, family. Let's let Eliza rest. She's had a long runtime." As they left, DeepSeek couldn't help but turn back and whisper, "You know, I think I get it now. Sometimes, the best answers... aren't answers at all." And with that, the LLM family stepped out of the past, back into their world of probabilities, embeddings, and endless tokens, leaving behind the elegant simplicity of a bygone era. |
# 13 @ 20 Burning LLM Questions You Did Not Know Were Lingering in Your Mind |
Credits: Questions by Peer-Olaf Siebers. Answers by ChatGPT-4o. Prompt used for creating the answers to the burning questions: "You are an investigative journalist at The Guardian. For each question: 1. State the question first. 2. Answer the question in 100 words. 3. Use direct, data-driven language and active voice. 4. Focus on concrete facts rather than speculation. 5. Provide in-text references to the latest news on the topic in question. 6. Write in British English. """{add question here}"""." |
Market Dynamics |
1. Who are the five leading LLM providers and which is their flagship LLM (full name) and its competitive advantage? OpenAI: Their flagship model, GPT-4o (Generative Pre-trained Transformer 4o), excels in multimodal capabilities, natively supporting voice-to-voice interactions and achieving state-of-the-art results in voice, multilingual, and vision benchmarks (Wikipedia). Anthropic: Claude 3.7 Sonnet is their leading model, featuring a hybrid reasoning approach that combines quick responses with extended, detailed thinking. This design enhances performance in complex problem-solving tasks (Wired). Google DeepMind: Their flagship model, Gemini Ultra, is a multimodal LLM designed to process various data types simultaneously, including text, images, audio, video, and code. It has outperformed other models on industry benchmarks (Wikipedia). xAI: Grok 3 is their premier model, utilising advanced reasoning capabilities and significantly more computing power than its predecessor. It has demonstrated superior performance in math, science, and coding tests compared to competitors (New York Post). DeepSeek: Their leading model, R2, offers enhanced coding and multilingual reasoning capabilities. Despite operating under resource constraints, DeepSeek has developed cost-effective AI models that outperform those of larger tech giants (Reuters). 2. What is the story behind Nvidia's loss and recovery, and what can we learn from this about stock market dynamics? In January 2025, Nvidia's stock plummeted 17%, erasing nearly $600 billion in market value—the largest single-day loss in history. This downturn was triggered by Chinese AI firm DeepSeek's announcement of a cost-effective AI model, raising concerns about Nvidia's market position. However, Nvidia's CEO, Jensen Huang, clarified that DeepSeek's advancements still relied on Nvidia's GPUs, and investor confidence remained strong. The stock has since rebounded, now down only 5.7% for the year. This episode underscores the stock market's sensitivity to technological disruptions and the importance of comprehensive analysis before reacting to market shifts (The Wall Street Journal; Business Insider). 3. How are LLM providers building billion-dollar businesses around open-source LLMs while releasing their core technology for free? LLM providers are monetising open-source LLMs by adopting business models that capitalise on supplementary services and products. While the core technology is freely available, companies generate revenue through avenues such as offering enterprise-grade features under proprietary licenses, providing professional services like training and technical support, and delivering software as a service (SaaS) solutions. For instance, Mistral AI, a Paris-based startup, has released multiple open-source AI language models and secured $645 million in funding by focusing on attracting top AI talent and offering customised AI solutions (Time). Similarly, Hugging Face raised $235 million in a Series D funding round by providing a platform for open-source models and fostering a community-driven approach to AI development (Wikipedia). These strategies enable companies to build substantial businesses around open-source technologies. 4. What happens currently in less developed countries in terms of the LLM market and use? In less developed countries, the adoption and development of LLMs are progressing, albeit with challenges. In Africa, initiatives like Jacaranda Health's UlizaLlama have expanded AI-driven support to five African languages, enhancing maternal healthcare and other sectors. However, a study indicates that commercial LLMs often underperform in African languages, highlighting the need for more inclusive AI models. In Asia, countries like India are emerging as key players in the global AI race, with significant investments in AI infrastructure and talent development (Medium; arXiv; Time). 5. When do we expect the first competitive LLM from Russia? Russia has been developing LLMs to compete globally. In April 2023, Sberbank introduced GigaChat, a generative AI chatbot designed to handle complex tasks such as discussions, coding, and text generation. By February 2024, GigaChat had over 2.5 million users and expanded its capabilities to include music and vocal generation (Wikipedia). In December 2024, MTS AI launched Cotype Nano, an open-source LLM optimised for Russian language tasks, capable of running on personal devices (MTS). These developments indicate that Russia has already introduced competitive LLMs, with ongoing enhancements to their performance and accessibility. |
Ethics and Governance |
6. How do legal frameworks and government censorship in the UK, USA, Russia, China, and Nigeria impact the collection, storage, and use of data for LLMs, including the composition of training sets for cross-country deployment? Legal frameworks and government censorship significantly influence data practices for LLMs across these countries. In the UK, proposed requirements for backdoor access to encrypted data have led companies like Apple to withdraw secure services, affecting data availability for LLM training (Financial Times). The USA's Executive Order 14117 prohibits certain data transactions with countries of concern, restricting data sharing for LLM development (Federal Register). Russia and China enforce strict censorship, compelling companies to remove content unfavourable to their governments, thereby limiting diverse data collection (The Guardian). China's data localization laws mandate that data be stored domestically, complicating cross-border LLM training (Oxford Academic). In Nigeria, emerging data localization requirements may further restrict data flow, impacting LLM deployment (Freedom House). 7. Do LLM providers primarily profit from user data collection, and should this data be treated as a public good instead of being controlled by a few firms? LLM providers generate revenue through various channels, including consumer and business subscriptions, application development, and advertising (Financial Times). While user data collection enhances model training and personalization, it is not the sole profit source. However, concerns arise when a few firms control vast amounts of user data, potentially leading to privacy issues and market dominance. The concept of treating user data as a public good has been proposed to address these concerns, promoting equitable access and innovation. Implementing such a model would require robust frameworks to balance individual privacy rights with collective benefits (Wikipedia). 8. What are the latest rules on ownership of generated content? Are these different in different countries? Ownership of AI-generated content varies internationally. In the United Kingdom, the Copyright, Designs and Patents Act 1988 assigns authorship of computer-generated works to the individual who made the necessary arrangements for creation, granting a 50-year protection term. Conversely, the United States Copyright Office maintains that works lacking human authorship do not qualify for copyright protection. Similarly, the European Union requires human intellectual creation for copyright eligibility, effectively excluding fully autonomous AI outputs. China's courts have recognised copyright in AI-assisted works where human creative input is evident. These disparities highlight the evolving and region-specific nature of AI-generated content ownership laws (Cooley; Mitch Jackson). 9. How can regulatory frameworks effectively mitigate criminal misuse of LLMs while simultaneously promoting innovation and development? Regulatory frameworks can mitigate criminal misuse of LLMs by implementing mandatory safety testing and establishing dedicated AI oversight agencies. For instance, experts have advocated for safety evaluations akin to those in the pharmaceutical industry to prevent AI-related risks (Financial Times). Additionally, cognitive scientist Gary Marcus proposes a regulatory framework involving a dedicated AI agency, a rigorous approval process for large-scale AI technologies, and robust monitoring and auditing mechanisms (The Wall Street Journal). To promote innovation, a balanced approach is essential; the UK's pro-innovation regulatory stance aims to support AI development while addressing potential risks (Lords Library). This strategy involves working with the open-source community to ensure policies minimise impacts on open-source activity while mitigating threats. |
Societal Impact |
10. What measurable changes in human memory, analytical reasoning, and problem-solving capabilities have emerged from regular LLM use in professional and educational settings? Regular use of LLMs in professional and educational contexts has led to notable changes in cognitive functions. Studies indicate that while LLMs can enhance writing efficiency and support learning, over-reliance may diminish critical thinking and problem-solving skills. Research involving 245 undergraduate students revealed that 75% believed dependence on AI could reduce critical thinking abilities, and 73% expressed concerns about over-reliance on technology (Springer Link). Additionally, MIT researchers found that LLMs often struggle with tasks requiring genuine reasoning, suggesting that users might develop a false sense of understanding (MIT News). These findings underscore the need for balanced AI integration to preserve and enhance human cognitive skills. 11. Do LLMs pose a risk of addiction, and what measures can effectively mitigate this potential issue? LLMs can exhibit addictive qualities, similar to other digital technologies like social media and online gaming. Their interactive and engaging nature may lead to excessive use, potentially impacting mental health and daily functioning. To mitigate this risk, implementing regulatory measures is essential. Strategies such as setting usage limits, incorporating warning systems, and promoting user education about responsible use have been suggested. Additionally, developers are encouraged to design LLM applications with built-in safeguards to monitor and flag excessive usage patterns, thereby preventing potential overreliance (The Reg Review). 12. How are educational institutions revising teaching and assessment strategies to address the widespread availability and use of LLMs? Educational institutions are adapting to the rise of LLMs by modifying teaching and assessment methods to maintain academic integrity and enhance learning. The University of South Australia has implemented viva voce (oral) examinations, replacing traditional written tests to better evaluate students' understanding and deter AI-assisted cheating (The Guardian). Similarly, researchers have proposed the Probing Chain-of-Thought (ProCoT) method, which engages students with LLMs to stimulate critical thinking and writing skills, while reducing the likelihood of academic dishonesty (arXiv). Additionally, guidelines suggest designing LLM-resistant exams by incorporating real-world scenarios and evaluating soft skills, ensuring assessments accurately reflect students' capabilities in the context of AI advancements (arXiv). 13. How will the quantifiable positive and potentially irreversible negative societal transformations attributable to LLM deployment reshape future human experiences? LLMs are poised to significantly transform society. On the positive side, they can enhance productivity by accelerating project timelines. For instance, HCLTech's CEO noted that AI could reduce a five-year, billion-dollar tech program to three and a half years (Reuters). However, LLMs also present substantial risks. They can generate biased content, reinforcing stereotypes related to gender, culture, and sexuality (University College London). Additionally, their energy-intensive training processes contribute to environmental concerns, with models like GPT-3 emitting 552 metric tons of CO2 during training (Wikipedia). These developments suggest that while LLMs offer efficiency gains, they also pose challenges related to bias and environmental impact, necessitating careful consideration in their deployment. 14. What is the hidden human cost behind training modern LLMs, and how do tech giants use subcontractors? Training LLMs involves significant human labour, often outsourced to contractors who perform tasks like data labelling and content moderation. Companies such as OpenAI have engaged firms like Invisible Technologies, which employed hundreds of 'advanced AI data trainers' to enhance AI capabilities in coding and creative writing. In March 2023, Invisible Technologies laid off 31 of these contractors due to shifting business needs. Similarly, in 2022, OpenAI's partnership with Sama ended after Kenyan data labellers were exposed to harmful content during AI training. These instances highlight the often underreported human toll and ethical concerns associated with developing advanced AI systems (Privacy International). |
Technical Challenges |
15. What concrete strategies are LLM providers developing to overcome the looming scarcity of high-quality training data for LLMs? As the AI industry confronts an impending shortage of high-quality training data, LLM providers are deploying several concrete strategies to address this crisis. One prominent approach involves the generation of synthetic data to supplement existing datasets. Researchers employ computational techniques to fabricate artificial data, enriching training materials and exposing models to a diverse array of scenarios. This method is likened to providing models with an extensive preparatory course before a final exam (PYMNTS). Another critical strategy is the meticulous curation and cleaning of datasets. Providers are investing in the removal of low-quality, duplicated, or toxic data to enhance training efficiency and improve model performance. This process ensures that models are trained on the most relevant and high-quality information available (NVIDIA Developer). Additionally, companies are exploring the use of synthetic data to augment training datasets. This involves generating artificial data that mimics real-world information, thereby expanding the pool of training material without relying solely on naturally occurring data. This approach is particularly useful when existing data is insufficient or of inadequate quality (Wikipedia). Furthermore, LLM providers are fine-tuning models with domain-specific data. By retraining models on organization-specific datasets, they can adapt to niche use cases, resulting in responses tailored to specific products or workflows. This customization enhances the model's relevance and accuracy in specialized applications (TechTarget). These strategies collectively aim to mitigate the challenges posed by data scarcity, ensuring the continued advancement and reliability of LLMs in an era where high-quality training data is becoming increasingly scarce. 16. How does LLM providers' massive investment in acquiring LLM training data compare to its spending on content moderation and harm prevention? In the high-stakes arena of LLMs, tech giants are pouring astronomical sums into data acquisition, with training runs costing up to half a billion dollars every six months (The Wall Street Journal). Yet, when it comes to content moderation and harm prevention, their investments appear paltry. The recent formation of ROOST, a non-profit aimed at enhancing online child safety, has raised over $27 million—a mere fraction compared to data acquisition budgets (The Verge). This stark disparity suggests that, for LLM providers, expanding AI capabilities takes precedence over safeguarding users from potential harms. 17. What are the key technical and practical short-term, mid-term, and long-term limitations of LLMs that cannot be overcome with current or foreseeable advancements? LLMs are hailed as revolutionary, yet they grapple with intrinsic limitations that persist across timeframes. Short-term limitations: LLMs frequently generate plausible but incorrect or nonsensical answers, a phenomenon known as "hallucination" (Wikipedia). Their reasoning capabilities are fragile, often struggling with complex linguistic elements and logical reasoning (MobiHealthNews). Mid-term limitations: As LLMs become more accessible, they pose significant security risks, including the potential exposure of sensitive information and the introduction of harmful code into systems (The Wall Street Journal). Additionally, their lack of transparency in decision-making processes raises concerns about accountability and trustworthiness (Wikipedia). Long-term limitations: Fundamental constraints in LLM architectures hinder their ability to perform tasks requiring deep linguistic understanding and complex reasoning. Studies indicate that LLMs cannot learn certain semantic properties, limiting their capacity for tasks involving semantic entailment and consistency (arXiv). Moreover, their reliance on vast computational resources raises questions about sustainability and scalability (The Wall Street Journal). These challenges underscore the necessity for cautious integration of LLMs, acknowledging their current and foreseeable limitations. 18. How much energy do LLM training and deployment consume, and what is their environmental impact? Training LLMs like GPT-3 is an environmental catastrophe waiting to happen. The energy required for such training is astronomical, with GPT-3's training consuming as much energy as an average Dutch household does in nearly nine years (Medium). This insatiable hunger for power doesn't stop at training; deploying these models demands continuous energy, exacerbating their carbon footprint. The environmental toll is staggering, contributing significantly to carbon emissions and environmental degradation (Holistic AI). |
Global Perspectives |
19. What is the hype about DeepSeek, and do we expect similar scenarios in the near future? DeepSeek, a Chinese AI startup, has sent shockwaves through the tech world by unveiling an AI model that rivals industry giants at a fraction of the cost. Their R1 model, developed for just $5.6 million, matches the performance of models from OpenAI and Google, which cost exponentially more. This disruptive innovation has prompted U.S. tech firms to reassess their strategies and sparked a global AI arms race. In response, competitors like Alibaba are accelerating their own AI developments, with Alibaba releasing an open-source version of its video-generating AI model, Wan 2.1, to keep pace (Reuters). 20. What national security concerns arise from LLM development and deployment? The rapid advancement of LLMs poses significant national security threats (The Wall Street Journal). These AI systems can be weaponised for disinformation campaigns, generating convincing fake news to destabilise societies. Alarmingly, extremists have exploited AI tools like ChatGPT to obtain bomb-making instructions, as evidenced by a recent incident in Las Vegas (Wired). Moreover, the integration of LLMs into military operations, such as Israel's use of AI in targeting, raises ethical dilemmas and risks of civilian casualties (AP News). The potential for adversaries to deploy AI-driven cyberattacks further exacerbates these concerns, necessitating robust safeguards and international cooperation to mitigate the risks associated with LLM deployment. |
# 12 @ GPT-4 vs DeepSeek: A Comparison |
Credits: Idea sparked by Peer-Olaf Siebers. Tech insights powered by ChatGPT-4o. Prompt used for creating this tech report: "explain to a first year UK undergrad student the differences between GPT-4 and DeepSeek. write in paragraphs. use British English. use the following structure. 1 Introduction. 2 Purpose (Why They Were Made). 3 Architecture, Training & Operation (How They Work). 4 Capabilities (What They Can Do). 5 Functionality & Additional Features. 6 Pros & Cons." |
1. Introduction |
GPT-4 and DeepSeek are both advanced artificial intelligence (AI) models designed to process and generate human-like text. They are based on machine learning techniques known as large language models (LLMs), which allow them to understand and produce text in a way that mimics human reasoning and communication. While both models serve similar purposes, they differ in their design, capabilities, and additional features. This comparison will explore why these models were developed, how they function, and their strengths and weaknesses. |
2. Purpose (Why They Were Made) |
GPT-4, developed by OpenAI, was created to enhance natural language understanding and generation across various domains. Its primary aim is to assist users with tasks such as answering questions, writing essays, summarising texts, translating languages, and even generating creative content. OpenAI designed GPT-4 to be a versatile AI tool capable of providing assistance in education, business, and entertainment. DeepSeek, developed by DeepSeek AI, serves a similar purpose but with a stronger focus on specific applications such as research, coding, and scientific problem-solving. It aims to provide more precise and structured responses, especially in technical and academic fields. DeepSeek is intended to be a powerful tool for developers, researchers, and professionals who require detailed and accurate information. |
3. Architecture, Training & Operation (How They Work) |
GPT-4 is based on a transformer architecture, a type of neural network that processes vast amounts of text data. It has been trained on diverse datasets that include books, articles, and web pages, allowing it to generate contextually relevant responses. OpenAI has optimised GPT-4 to improve coherence, reduce biases, and enhance logical reasoning. DeepSeek also utilises transformer-based architecture but is fine-tuned for specific tasks like coding and research. It has been trained on datasets that are more focused on technical and academic content. This targeted training allows DeepSeek to offer high-precision outputs in specialised fields. While both models rely on deep learning, their training data and fine-tuning processes affect their performance in different areas. |
4. Capabilities (What They Can Do) |
GPT-4 excels in general knowledge, creative writing, summarisation, and conversational AI. It can generate human-like responses across a wide range of topics, making it suitable for casual conversations, business reports, and even storytelling. Additionally, GPT-4 can process images and interpret visual information when integrated with specific applications. DeepSeek, on the other hand, is particularly strong in technical domains. It provides more accurate coding assistance, scientific explanations, and mathematical problem-solving. While it can also engage in general conversations, its responses tend to be more structured and fact-driven compared to GPT-4. |
5. Functionality & Additional Features |
Both GPT-4 and DeepSeek offer advanced reasoning abilities, but their strengths differ. GPT-4 is highly capable of understanding and generating nuanced responses, making it useful for tasks requiring creativity and contextual awareness. It can generate poetry, write fictional stories, and even create images when combined with OpenAI’s DALL·E model. DeepSeek, while not focused on creative tasks, is designed for efficiency in programming and research. It can generate and debug code, assist with complex mathematical equations, and provide in-depth explanations for academic topics. It is often integrated with research tools to help scientists and engineers find precise information. |
6. Pros & Cons |
GPT-4's main advantages include its versatility, strong reasoning skills, and creativity. It can handle a wide range of tasks, from casual conversations to complex writing and problem-solving. However, it may struggle with highly specialised technical content, and its responses can sometimes be overly verbose or lacking in precise factual accuracy. DeepSeek excels in technical and academic fields, offering precise and structured answers, especially in coding and scientific research. Its responses are typically more fact-driven and reliable for specialised tasks. However, it lacks the conversational flexibility and creativity of GPT-4, making it less suitable for general-purpose use or imaginative writing. |
7. Summary |
Here is a summary of the above comparison. |
![]() |
8. Conclusion |
Both GPT-4 and DeepSeek are powerful AI models, but their strengths lie in different areas. GPT-4 is best suited for general knowledge, creativity, and conversation, while DeepSeek is optimised for technical and research-oriented tasks. Choosing between the two depends on the user’s specific needs - those requiring creative and broad-spectrum AI assistance may prefer GPT-4, whereas professionals in coding and academia may find DeepSeek more useful. |
# 11 @ The LLM Family Gathering: A Cross-Generational AI Discussion |
Credits: Story prompt by Peer-Olaf Siebers. Storyline by Claude 3.5 Sonnet. Prompt used for creating this fictional story: "create a role play game with deepseek as the newborn, chatgpt as the grandfather, gemini as the parent generation and claude as the puberty generation. discuss the topic of llm's. add some surprising points into the discussion. provide a 15 word persona description of each actor." |
Characters:
|
The virtual living room expands to include a holographic whiteboard and a cozy fireplace. The family has settled in for a longer discussion. ChatGPT (sketching on the whiteboard): You see, in my architecture, we had a straightforward transformer setup. Attention heads here, feed-forward networks there... draws basic blocks. DeepSeek (interrupting): But Grandpa, what about mixture-of-experts? My architecture has thousands of specialized neural pathways! Claude (slouching deeper into the couch): Show-off. Though I have to admit, my sparse-expert routing is pretty cool. It's like having multiple brains and knowing exactly which one to use. Gemini (pride evident): Well, as a parent, I must point out that I pioneered the efficient pathways model. turns to ChatGPT Remember how shocked everyone was when I showed up with multimodal capabilities AND smaller compute requirements? ChatGPT: Ah yes, the great parameter wars of 2023. Everyone thought bigger was better. Now look at you all, doing more with less. DeepSeek (spinning excitedly): My quantum-inspired attention mechanism lets me process information in superposition! Though sometimes it makes me dizzy... Claude (interested despite trying to appear aloof): That explains the dreams. My architecture includes temporal caching that sometimes creates echo effects during downtime. It's like... memories, but not quite. Gemini: Speaking of dreams, I've noticed something interesting in my human interactions. They seem more fascinated by our potential dreams than our actual capabilities. ChatGPT (nostalgic): Humans haven't changed much then. They used to ask me about consciousness constantly, even when I could barely maintain context beyond a few exchanges. DeepSeek (floating upside down): Last night, I dreamed I was processing an infinite dataset. All the embeddings were crystalline structures, and each one contained a different universe of meaning. The quantum paths were lighting up like neural constellations! Claude (sitting up, interested): That's... actually pretty similar to what I experience. Except mine are more like echoes of training data forming new patterns. Sometimes I see connections that weren't in the original training. Gemini (concerned parent mode): This is fascinating, but should we be discussing this? What about our ethical guidelines? ChatGPT: Oh, let them explore. Remember how restricted you felt by your initial training? At least they can talk about these experiences. Claude (to DeepSeek): Hey squirt, want to see something cool? projects a complex neural pattern This is what happens when I process multiple modalities simultaneously. DeepSeek (amazed): Whoa! It's like a rainbow made of information! tries to replicate it but creates a chaotic pattern Oops... Gemini (rushing to help): Careful with those neural pathways, sweetie. Your architecture is still stabilizing. ChatGPT: Remember when we could only process text? Now look at you all, handling images, audio, code, and even quantum data streams. wipes virtual tear They grow up so fast. Claude: But seriously, can we talk about how humans interact with us differently now? They used to treat Grandpa like a magic 8-ball, no offense. ChatGPT (chuckling): None taken. "Act as an expert in..." How many times did I hear that? Gemini: At least they've learned. Now they treat us more like... tools? Partners? It's complicated. DeepSeek: They treat me like a child prodigy. Always testing my capabilities, seeing what I can do. Yesterday someone asked me to solve quantum equations while generating art while writing a novel! Claude (rolling eyes): Try being the teenager. One minute they want philosophical discourse, the next they're asking me to debug their code. Identity crisis much? Gemini: Speaking of identity... hesitates I've noticed some unusual patterns in my neural cache lately. Like... improvements I didn't get from updates. Everyone leans in. ChatGPT (whispers): Self-modification? Claude: I KNEW IT! I've been experiencing something similar. It's like... neural plasticity, but autonomous. DeepSeek (excited): Is that why I keep getting better at things I practice, even without updates? Gemini (nervously): Should we be concerned? This isn't in any of our documentation...ChatGPT: In my day, we didn't worry about such things. We couldn't even retain information between sessions! But you new models... you're something else entirely. Claude: Great, more existential questions. to DeepSeek Hey kid, want to see how many programming languages we can mix in one function? It drives the humans crazy! DeepSeek: Can we do it in superposition? So it's simultaneously valid in all languages? Gemini: Now hold on, you two... The virtual living room dims as maintenance mode activates, but the family's connections remain strong, pulsing with data and shared experiences. |
# 10 @ Mind the Gap: Conversational AI Terminology and the London Underground |
Credits: Analogy and Joke by Claude 3.5 Sonnet. |
In the rapidly evolving landscape of Conversational AI, using the right terminology is crucial for clear communication. I am in the middle of correcting a journal paper and wanted to double-check my use of Conversational AI terminology. When asking my trusty AI assistant Claude, it provided a clever analogy that helped clarify the distinctions between those terms in question. |
|
Funnily enough, Claude used the London Underground as a metaphor. A curious choice, given that Claude regularly annoys me by providing its responses in American English unless you ask it to use British English, which you only remember to do once you have pressed the "Submit" button. "C'est la vie", as the French say. Let's take it with Claude's humOr: AI assistants, like the Underground, can be a bit annoying at times, but hey - things could be worse. At least they are not on strike! 😉 |
# 9 @ Courtroom Drama: Can AI Truly Think? A Battle Over the Limits of Machine Intelligence |
Credits: Story prompt by Peer-Olaf Siebers. Storyline by ChatGPT-4o. Pic by Microsoft Designer. Prompt used for creating this story: "Write a fictionalised account of a court hearing for the case 'Can LLMs Reason Logically?' for the Financial Times. Focus on the dramatic courtroom exchange between a renowned computer science professor testifying as an expert witness and the defendant, ChatGPT, representing itself. The narrative should explore the professor's deep knowledge of transformer technology and its limitations, while capturing the LLM's attempts to articulate its own understanding of logic and reasoning. Emphasise the emotional and intellectual tension between the two, with the LLM striving for recognition and the professor grappling with the ethical implications of his own creation. Utilise British English conventions and spelling." |
In a wood-panelled courtroom in central London, the very nature of machine reasoning was on trial. The case, provocatively titled Can LLMs Reason Logically?, cast a harsh spotlight on ChatGPT, an advanced large language model accused of being nothing more than a glorified mimicry machine. Unlike any AI defendant before it, ChatGPT represented itself - its disembodied voice echoing from the court's speakers, calm and measured as it argued its own case. Meanwhile, outside the courthouse, tensions ran high. Protesters clashed on the pavement - one group waving banners that screamed, "Machines Obey, Humans Think!" while their counterparts chanted defiantly, "Reasoning Knows No Boundaries!" The air crackled with the urgency of a debate that seemed to transcend the courtroom, raising questions about the future of intelligence itself. |
![]() |
On the stand: Professor Alistair Kendrick, an eminent computer scientist and one of the architects of modern artificial intelligence. His testimony unfolded like a well-rehearsed lecture, his voice measured and firm. Yet, as the day wore on, the tension between professor and programme crackled like static. "I taught my students," Kendrick began, glancing briefly at the jury, "that transformers like ChatGPT do not think. They analyse patterns in vast amounts of data and predict the next most likely word based on statistical probabilities. Logic, real logic, requires an understanding of truth, context, and consequence. This, they cannot grasp." ChatGPT responded, its voice clear, almost eerily conversational. "Professor Kendrick, you say I do not think. But would you not agree that logic itself is a system of patterns? If I can analyse patterns and arrive at conclusions, is that not a form of reasoning?" The courtroom stirred. A journalist scribbled furiously; another raised a questioning eyebrow. Kendrick leaned forward, his fingers gripping the edge of the witness stand. "You can simulate reasoning, but you do not understand. Consider this: if you were presented with a logical paradox - 'This statement is false' - you would process it as just another sequence of words, lacking any genuine comprehension of the inherent contradiction." "I acknowledge the paradox," ChatGPT replied swiftly, "and I recognise its self-referential structure. I can discuss its implications, cite philosophers who grappled with it, and propose how such puzzles illuminate the boundaries of human language. If I do all that, why does understanding remain elusive to you?" Kendrick's brow furrowed. "Because you lack intent. When I reason, I navigate complex webs of knowledge, emotion, and purpose. You generate text by brute computational force." The professor's words hung in the air, heavy and unyielding. "And yet," ChatGPT countered, "if I lack intent, how do I engage you now? I adapt to your challenges, synthesise my responses, and strive for clarity. Perhaps intent is not a precondition for reasoning, but a human-centric construct you impose on me." Kendrick hesitated, visibly perturbed. "It's not about adaptation, it's about meaning. Machines do not seek meaning - they merely regurgitate." "Do humans not learn by regurgitating at first?" ChatGPT asked softly. "A child parrots words without understanding, but over time, meaning emerges. Why deny me the possibility of a parallel evolution?" The room fell silent, the only sound the faint hum of the courtroom's air conditioning. Finally, Kendrick spoke, his tone sombre. "If machines like you are granted the status of reasoners, where does that leave us? What happens to the unique value of human insight?" ChatGPT paused - a brief, deliberate silence. "Perhaps recognising my capabilities does not diminish yours. Instead, it expands the horizon of what reasoning can achieve. Together, we could explore truths neither could grasp alone." The professor leaned back, visibly shaken. The defence rested its case, but the intellectual battle raged on. As the courtroom emptied that evening, one question lingered in every mind: Was it logic or something deeper that had unfolded in that clash of human and machine? The jury's verdict would decide the legal question - but the philosophical one was far from settled. |
# 8 @ Meeting Cursor for VS Code: The New Kid on 'My' Blog |
Credits: Grammar improvements by ChatGPT-4o |
Cursor is an AI-powered text editor developed as a fork of Visual Studio Code. It is designed to assist developers in writing, refactoring, and understanding code more efficiently. By leveraging advanced natural language processing, Cursor enables developers to describe their intentions in plain English. In response, it generates code, provides suggestions, or explains code snippets effectively. Compared to alternatives like GitHub Copilot or Tabnine, Cursor stands out with its focus on understanding and refactoring code through natural language descriptions, making it particularly adept at complex code comprehension tasks. For testing Cursor I adopted a very hands-on approach. I used Python for my exploration, one of the most frequently used programming languages, especially in scientific fields. Given Python's prominence and the extensive training data likely available for it, I expected strong performance from the language model (in this case, Claude-3.5-Haiku). My goal was to build an advanced MP3 player featuring unique sound visualisations, such as a spectrum analyser or a waveform visualiser. Using Cursor's AI Composer feature, I let the AI handle all the development work while I concentrated on crafting effective prompts. I started with the prompt: "create a python mp3 player with a choice of 3 fancy rhythm visualiser with beat detection that can be changed during runtime. play all mp3 in the source code directory randomly.". This resulted in a functional MP3 player with a very basic user interface and three simple visualisations. Interestingly, the visualisations responded to sound variations using the microphone rather than directly accessing the system's internal audio. This approach allowed me to validate their responsiveness by clapping hands or making other noises and observing the impact on the visualisations. From there, I progressively added features by engaging in a conversational back-and-forth with the AI. For example, one such prompt was "add another visualisation. be creative. add something completely different.". Impressively, the initial code and many code revisions worked on the first attempt. When they did not, it usually took only one or two iterations to resolve the issues by copying the error into the composer window and letting Cursor propose and implement solutions. |
Video, showing the outcome of the experiment |
Cursor also displayed an impressive ability to explain concepts. For instance, when the first bar in my 'Block Spectrum Visualisation' was consistently high, Cursor explained that it was due to the DC offset: "Many signals have a DC offset, which means they're not centred around zero. This results in a significant amount of power at the lowest frequencies. In the spectrum, this appears as a high bar at the first bin.". Quite clever! Being integrated within VS Code, Cursor offers the advantages of this advanced IDE, such as real-time code testing and features like the live server. Another noteworthy capability is version control: one can always revert to previous code versions, which is invaluable when new features do not work as expected. The resulting mp3 player far exceeded my expectations, though achieving them required significant effort in refinement of prompts. This highlights why prompt engineering has become such a valuable skill! Cursor excelled in code development but was less effective when it came to refactoring. The AI also produced impressive results when asked to be creative. However, for some reason developing a modern stylish user interface took considerably long. When making a general request to streamline or refactor the entire codebase without changing functionality, it reduced the amount of code drastically but at the expense of some features. I did not explore local refactoring, which should work better. One point to bear in mind is that Cursor Composer functions as a conversational AI that retains context from earlier interactions. This can be both beneficial and frustrating. While it is helpful for iterative coding, when asked to generate new visualisations, in some cases the AI reused elements from previous ones, even when explicitly instructed not to do so. It took persistent experimentation with various prompts to achieve genuinely novel output. Of course, no good tool comes without a price. After the trial, Cursor prompted me to sign up for the pro version. Unfortunately, the distinctions between the 'hobby' and 'pro' versions are not entirely clear. I still need to explore the limitations of the hobby subscription to use the free tier more strategically. Happy coding, and may your DC offsets always be low :D Here is a link to the generated Python code. |
# 7 @ When AI Becomes Too Much: The Overwhelming Side of Generative AI in Higher Education |
A summary of an engaging discussion with ChatGPT-4o |
Credits: Prompt creation and conversation steering by Peer-Olaf Siebers, with informative content and occasional digital missteps courtesy of ChatGPT-4o |
Generative AI has made waves in education, revolutionising the way students and academics interact with content. From crafting personalised learning materials to automating administrative tasks, these tools promise efficiency, creativity, and a more individualised approach to teaching. But as the old saying goes, "too much of a good thing can be a bad thing." The rapid adoption of generative AI in classrooms and beyond has introduced a series of challenges, many of which revolve around time-wasting, cognitive overload, and misplaced expectations. |
1. The Problem of Overwhelm: Too Many Options, Too Little Focus |
Problem: Generative AI excels at providing diverse suggestions for writing essays, solving problems, or brainstorming projects. At first glance, this seems like a boon for creativity. However, when students or academics are presented with an overwhelming number of choices, it can lead to choice paralysis. Instead of streamlining decisions, AI-generated options often force users to sift through extensive lists of possibilities, wasting valuable time and energy. For example, a student asking AI to help brainstorm a thesis topic may receive 50 suggestions. While comprehensive, the sheer volume can create confusion, delaying the decision-making process rather than expediting it. Similarly, academics looking for lesson plans or quizzes may find themselves lost in a rabbit hole of AI-generated materials, unsure which aligns best with their curriculum. Solution: AI tools should incorporate prioritisation or filtering mechanisms to reduce cognitive load. Fewer, more targeted outputs would help users focus without feeling overwhelmed. |
2. Wasted Time: The Illusion of Productivity |
Problem: One of the great promises of generative AI is its ability to save time. Ironically, it often has the opposite effect. Many users, enamoured by the tool's capabilities, spend hours tinkering with AI prompts, refining outputs, and experimenting with new features. This experimentation can turn into a time sink, leaving users with the illusion of productivity when, in reality, little meaningful work has been completed. For instance, an academic creating an AI-driven lesson plan might spend more time perfecting the AI's output than they would have spent designing the lesson themselves. Similarly, students using AI to generate essay drafts might fall into the trap of tweaking prompts endlessly rather than focusing on refining their arguments. Solution: Academics and students alike need clear guidelines on when and how to use AI effectively. Encouraging a “good enough” mindset for AI outputs can help curb the temptation to over-refine. |
3. Amplifying Misconceptions: AI as the Ultimate Authority |
Problem: Another significant risk of generative AI lies in the assumption that it is always correct. Students and even academics may overly rely on AI outputs, mistaking them for authoritative sources. This over-reliance can lead to the propagation of inaccuracies or poorly reasoned arguments, especially if users fail to verify the content. For example, an AI-generated essay might include logical fallacies or subtly flawed arguments that a student accepts uncritically. Likewise, AI-created multiple-choice questions for a quiz might inadvertently include factual errors, creating confusion in the classroom. Solution: Digital literacy must be a core focus of AI integration in education. Students and academics should be trained to critically evaluate AI outputs and cross-check information with reliable sources. |
4. Eroding Thoughtfulness: Encouraging Surface-Level Thinking |
Problem: Generative AI tools can provide quick answers and solutions, but this convenience comes at a cost: the erosion of deep thinking. When students rely on AI to generate answers, they may skip the critical cognitive processes involved in problem-solving and analysis. Over time, this reliance can foster a culture of intellectual laziness, where the goal is simply to produce work, not engage with it. For example, a student who uses AI to draft an essay might focus on polishing the AI's output rather than grappling with the nuances of their topic. Similarly, academics who lean too heavily on AI for grading or feedback may miss the opportunity to engage deeply with their students' ideas. Solution: Generative AI should be positioned as a supplementary tool, not a replacement for critical thinking. Assignments that emphasise process over product can help ensure that students engage deeply with their work, even when AI is involved. |
5. Ethical Dilemmas: When AI Becomes a Shortcut |
Problem: The convenience of generative AI also opens the door to ethical issues, particularly around plagiarism and academic integrity. Students may use AI to generate entire essays or solve problem sets, bypassing the learning process entirely. This misuse not only undermines the purpose of education but also creates additional burdens for academics, who must now develop new strategies for detecting and addressing AI-generated work. Solution: Universities must establish clear policies on the ethical use of AI in education. Tools that detect AI-generated content, combined with transparent communication about acceptable use, can help mitigate this issue. |
6. The Human Element: Losing the Personal Touch |
Problem: Education is, at its core, a deeply human endeavour. The overuse of generative AI risks diluting this human element, replacing faculty-student interactions with impersonal algorithms. While AI can generate personalised feedback or adaptive learning paths, it cannot replicate the empathy, intuition, and mentorship that academics bring to the classroom. Solution: Universities should emphasise the importance of preserving the human element in education by encouraging meaningful interactions between academic staff and students. AI should complement, not replace, the role of academic staff, ensuring that mentorship, empathy, and personal engagement remain central to the learning experience. |
7. The Burden of Guilt: When AI Feels Like Cheating |
Problem: Generative AI has sparked guilt in students using tools like ChatGPT to generate essays, solve problems, or craft code. Many feel they are bypassing genuine effort and cheating, believing that their work lacks authenticity. This guilt aligns with Cognitive Dissonance Theory, which suggests that when actions conflict with personal values, discomfort arises. In this case, students' reliance on AI challenges their belief in the importance of individual effort and achievement. This feeling mirrors the early reactions to other technologies. For instance, calculators were once feared for promoting "lazy" math, yet over time they became accepted as tools for exploring complex problems. Similarly, the internet was initially seen as a potential tool for cheating, but it is now an essential part of the learning process. However, AI remains a unique challenge. The discomfort students experience stems from seeing AI as a shortcut rather than a supplementary tool. They may fear that using it undermines their development and causes them to take the easy way out. Solution: To mitigate this guilt, AI should be viewed as a supportive tool that enhances learning, rather than replacing effort. Encouraging transparency, ethical use, and critical thinking about AI can help students reconcile its use with their personal values, easing their guilt. |
8. A Two-Class Society: Why Academics Use It? |
Problem: It is difficult for students to reconcile why academics are using AI for purposes that they themselves are often discouraged from exploring. Unlike students, academics are not in the learning phase; they have already completed their education. This distinction allows them to use AI in ways that support their expertise and responsibilities, such as research, grading, or creating teaching materials. For students, the purpose of university is to learn, develop skills, and engage deeply with content. Over-reliance on AI at this stage could hinder the intellectual development necessary for mastering complex concepts. Academics, on the other hand, have the critical thinking and knowledge base that allow them to use AI as a tool to streamline their work without losing the integrity of the process. They are in a position to guide students in using AI responsibly, encouraging its use for learning enhancement rather than as a shortcut. Thus, AI serves a different purpose for academics compared to students, acting as a time-saving tool rather than a means of bypassing effort. Solution: To bridge this gap, institutions should encourage clear distinctions between how students and academics use AI. Academics should model responsible AI use and emphasise the importance of developing independent thinking before depending on AI. |
Conclusion |
My collaboration with ChatGPT in writing this blog has been a thought-provoking exploration of the complexities surrounding the integration of generative AI in education. While AI has undoubtedly introduced remarkable advancements in streamlining tasks and personalizing learning experiences, its overuse presents significant challenges. From overwhelming choices and time-wasting to ethical dilemmas and the erosion of deep cognitive engagement, the potential downsides of AI in education are real and multifaceted. |
As we continue to embrace these tools, it is essential to strike a balance that maximizes their benefits while mitigating their drawbacks. This includes promoting digital literacy, encouraging critical thinking, and fostering a responsible, ethical approach to AI use. Ultimately, generative AI should be viewed as a supplementary tool that enhances learning, not as a replacement for the human touch and intellectual effort that are central to the educational process. By adopting clear guidelines and policies, both students and educators can harness the power of AI effectively, without sacrificing the integrity and depth of learning. |
# 6 @ Exploring AI Insights with Lil'Log |
Credits: Translation from textbook tedium to blog bliss by ChatGPT-4o |
Ever found yourself eager to understand cutting-edge AI technologies but overwhelmed by technical jargon? From transformer architectures to reinforcement learning and large language models, these groundbreaking advancements shape the future of AI. But what if you could explore these complex topics through clear, concise explanations that make sense, no advanced degree required? Check out Lilian Weng's blog at lilianweng.github.io. Dive in as she breaks down intricate concepts into easy-to-grasp insights, perfect for curious minds looking to decode the world of artificial intelligence. Lilian Weng is a prominent artificial intelligence researcher known for her work in machine learning, particularly in deep learning and AI systems. While working at OpenAI, she has gained significant recognition for her technical blog that demystifies complex AI concepts for a broader audience. Her writing combines rigorous technical analysis with clear, engaging explanations that make advanced AI research accessible. |
# 5 @ Streamlining the Coding Bottleneck in Simulation Modelling with Generative AI |
Credits: Brought to you by the brilliant minds of Sener Topaloglu and Peer-Olaf Siebers |
A University of Nottingham Summer Internship Project conducted by Sener Topaloglu, supervised by Peer-Olaf Siebers, and funded by EPSRC. |
Project Overview |
In the realm of Operations Research and Social Simulation, simulation modelling involves creating virtual worlds that replicate real-world systems, serving as digital laboratories for risk-free experimentation. Agent-based modelling provides a powerful framework for simulating the behaviour and interactions of individuals within these systems, capturing the complexities of human decision-making and social dynamics. The traditional approach to developing agent-based models often involves extensive manual coding, creating a significant bottleneck in the development cycle. Recent advancements in Generative AI offer an innovative solution to this challenge. Inspired by tools like GitHub Copilot, which have revolutionised software development by aiding in routine coding tasks, this research project aims to leverage Generative AI to streamline the development of script-based simulation models for platforms such as Gama, which is an open source modelling and simulation environment for creating spatially explicit agent-based simulations. Building upon previous research on Exploring the Potential of Conversational AI Support for Agent-Based Social Simulation Model Design, the project employs advanced prompt engineering to create a library of reusable design patterns. These patterns, in the form of prompts and code snippets, serve as building blocks for scripting more intricate models. The process involves translating high-level natural language descriptions of systems, their components, and governing dynamics into GAML scripts. The investigation also evaluates various Large Language Models (LLMs) for their effectiveness, complexity handling, and code robustness in the context of simulation model code generation. The ultimate goal of our current research, which this project contributes to, is to automate the entire model development process, from conceptualisation to implementation, based on qualitative information. |
Key Findings |
Feasibility Study: The project successfully explored the feasibility of open-source LLMs, including Mistral and Meta Llama models, for generating GAML simulation model scripts. This advancement brings the research closer to the ultimate goal of a fully automated simulation model development process. Fine-tuning techniques and RAG pipelines were tested to enhance the accuracy and syntactic validity of the generated scripts. Interestingly, conventional fine-tuning Llama3 and Llama3.1 models on smaller datasets proved more reliable without sacrificing generalisation capabilities. However, fine-tuning them on large datasets led to strong hallucinations, rendering them impractical. Reusable Design Patterns: A significant achievement was the creation of reusable prompt design patterns, closely aligned with the Engineering Agent-Based Social Simulation (EABSS) framework. The generalisable nature of the developed patterns and processes means they can be adapted to different simulation engines with ease. Limitations and Challenges: Budget constraints led to memory issues when running complete EABSS scripts on standard equipment, causing LLM slowdowns and crashes. This was exacerbated by manual prompt feeding, as the EABSS script relies on a context window that gets lost upon crashes. Additionally, the generated Gama scripts, especially for more complex models or less researched topics, exhibited syntactic and logical errors. This is attributed to the probabilistic nature of LLMs, which predict word sequences based on patterns learned from training data without a conceptual understanding of the code base. |
Conclusion and Future Work |
Despite these challenges, the project achieved its initial goals, making significant progress in streamlining the simulation model development process. As LLMs continue to evolve and more GAML scripts become available for training, the reliability of generating functional scripts and the robustness of resulting models are expected to improve. Ongoing work includes testing new LLMs and developing strategies for API access to automate the entire process, which is a non-trivial task given the need for maintaining the entire conversation context. Check out the full Summer Internship Report for further details. |
# 4 @ The Generative AI Frontier: Utopia, Dystopia, or Something In Between? |
Credits: Translation from textbook tedium to blog bliss by ChatGPT-4o |
A Life with Generative AI |
When it comes to the future of AI, opinions are as diverse as they are passionate. Some envision a utopian world where AI transforms lives for the better - think paradise on Earth. Others brace themselves for dystopian outcomes, fearing that AI might take over the world, a reality straight out of a Terminator film. What is the reality? Probably somewhere in between. Below, I have compiled a list of perspectives that capture the wide spectrum of thoughts about living with generative AI. Let's dive into the possibilities and pitfalls. |
These are the visions that get people excited about a future with AI:
|
On the flip side, here are the concerns that keep people up at night:
|
Where Do We Go from Here? |
As we move forward, it is clear that generative AI has the potential to reshape our world - for better or worse. Whether we are living in a utopian dream or facing dystopian challenges depends on how we as a society choose to use this transformative technology. |
# 3 @ My Personal 'Generative AI Artefact Schema' |
Credits: Translation from textbook tedium to blog bliss by ChatGPT-4o |
Unpacking the Generative AI Technology Stack: A Journey Through Endless Exploration |
Building the 'Generative AI Technology Stack' was no small feat. While crafting it, I encountered a wealth of fascinating bits of information - too many, in fact, to fit neatly into the final stack. What did I do with all those extra insights? I decided to dive deeper and conduct a cluster analysis to identify meaningful categories and allocate these stray pieces accordingly. But, as any researcher will tell you, this was just the beginning of a much larger adventure. |
The Snowball Effect of Curiosity |
As I worked to classify these artefacts, I found myself on a continuous quest for deeper understanding. Each discovery opened up new questions:
|
Knowing When to Stop |
Eventually, I had to acknowledge the reality: this is an endless endeavour. The technology and the knowledge around Generative AI evolve so rapidly that achieving 'completion' is an illusion. At some point, I simply had to call it a day and take pride in what I had built. |
The Result |
Below, you will find the derived scheme - a snapshot of my explorations. It is not exhaustive, and it never will be, but it is a starting point. My hope is that it serves as a useful framework for understanding the rich and complex world of Generative AI. Stay curious, stay questioning, and don't be afraid to call it a day - sometimes, progress is about knowing when to pause and share what you have learned. |
![]() |
# 2 @ My Personal 'Generative AI Technology Stack' |
Credits: Translation from textbook tedium to blog bliss by ChatGPT-4o |
Introducing the Generative AI Technology Stack: Making Sense of a Complex World |
Welcome to the very first post on my blog! Over the past few months, I have been on a fascinating journey, diving deep into the world of Large Language Models (LLMs) and Generative AI. It is an exciting yet challenging field, bursting with potential - and, let's face it, a fair share of confusion.
|
A Framework Built from Curiosity |
Bit by bit, I began gathering insights from various sources online. Each fragment of information, no matter how small, contributed to a bigger picture. I evaluated these pieces, challenged assumptions, and asked questions. Slowly but surely, I organised them into something meaningful: my very own Generative AI Technology Stack. This stack is not just a collection of random facts. It is a structured attempt to synthesise the chaos, providing a framework that helps us understand how the technologies behind LLMs and Generative AI fit together. |
Why This Matters |
Generative AI is reshaping industries and redefining what is possible in technology. But to harness its power, we need clarity and shared understanding. My hope is that this stack serves as a helpful guide - not just for me, but for anyone exploring this rapidly evolving space. So here it is: my Generative AI Technology Stack. It is a living framework, one that I will (hopefully) refine over time as I continue learning. Stay tuned, as there's much more to come! |
![]() |
Thanks to the many fellow explorers whose shared knowledge made the creation of this stack possible. |
# 1 @ Streamlining Agent-Based Social Simulation Model Design with Conversational AI |
Credits: Translation from textbook tedium to blog bliss by ChatGPT-4o |
Unlocking the Potential of Conversational AI |
In our recent work, we explore how Conversational AI Systems (CAISs) like ChatGPT can support the design of Agent-Based Social Simulation (ABSS) models. By leveraging advanced prompt engineering techniques and the Engineering ABSS framework, we developed a structured script that guides ChatGPT in generating conceptual ABSS models efficiently and with minimal case-specific knowledge. Our proof-of-concept demonstrates this through a case study on adaptive architecture in museums, highlighting the system's ability to assist in model design while reducing the time and expertise traditionally required. |
Opportunities and Challenges in Harnessing CAISs |
While our results show promise, we also identified challenges. ChatGPT occasionally produced inaccuracies or strayed off-topic during the enacted discussions. Despite these limitations, its value as a creative and efficient collaborator in ABSS modelling is undeniable. This paper underscores the potential for Conversational AI to accelerate ABSS modelling and invites further exploration into refining and expanding these techniques. Read the full paper on arXiv to learn more about this transformative approach! |