(How) Will Generative AI Change Education? 

AI taking over education | Prompted on Bing

Key Takeaways

  • Generative AI has many exciting benefits for the education sector (among many other industries) but also carries significant potential for misuse.

  • Educators must decide whether to try to control the new technology or adapt to it in order to minimize misuse.

  • Suggested solutions such as watermarks and classifiers (that identify AI-generated content) are flawed, and there is no obvious solution to this challenge.

  • Course policies that ban products like ChatGPT would be difficult to enforce and would also deprive students of the opportunity to learn about a powerful technology that will be a significant part of society going forward.

  • Therefore, instructors must adapt by emphasizing student reasoning and step-by-step thinking in arriving at answers, and by testing student mastery of course concepts. Reasoning is where humans shine and generative AI often fails, making it a weakness that educators can exploit to reduce the likelihood of misuse.

  • New updates from OpenAI show promise for a game-changing high-level language in which unstructured queries, coupled with structured functions, give structured outputs (as opposed to the rigid structured query requirements of existing programming languages), making programming far more user-friendly.

  • With no easy answers, it will be important to educate students from primary school to higher education on how AI and programming languages work. This will help to maximize the benefits of the new technology while also building critical thinking, creativity, and reasoning skills.

Quick Intro

As individuals and enterprises rapidly adopt generative AI products, concerns are growing that people will misuse these technologies in the absence of ways to detect and safeguard against such use. As Google Chief Decision Scientist Cassie Kozyrkov says, ChatGPT was a user experience (UX) revolution. Rather than having users deal with AI on the backend (as in face unlocking or movie recommendations), OpenAI changed the way users interact with the technology. Now users can interact with AI directly and apply these powerful technologies creatively. While we are already seeing tremendous benefits, such as generating creative ideas and extracting information from unstructured data, cases for misuse abound.

The Generated Content Conundrum

You might have noticed the watermark in the lower left corner of the header image indicating that it was AI-generated; if you look closely, you will see the Bing icon. While it is possible to watermark AI-generated images (despite workarounds such as simply cropping out the logo), watermarking AI-generated text is even harder. Where do you even draw the line? If the output is only a few words, how can it be watermarked? Should there be a threshold, say ~500 words of AI-generated content, above which a watermark is required? And if you do define such a threshold, how do you attach the watermark to the text? If it is just a few words, can't you simply delete them?

Clearly, the watermark solution is lacking. Let's consider another solution: a classifier that tells you whether some text is human-generated or AI-generated. This method has pitfalls for similar reasons. If the AI gives a simple yes/no answer, does that now get classified as AI-generated? You can see how such a classifier might be no better than a random number generator.

I don’t think there is any obvious solution to this. Instructors can completely ban the use of generative AI tools like ChatGPT and Bard as part of their course policy, but in my opinion this is a false solution: students might still use them undetected, and it deprives those who do follow the rules of the opportunity to learn about a technology that will be a significant part of society in the near future.

Instead, a better approach would be to emphasize student reasoning in arriving at answers and to test students' mastery of course concepts. In fact, reasoning and step-by-step thinking are where human intelligence shines and generative AI often fails.

As an example, if you ask ChatGPT the math question below in one shot, it falsely states that the student got the right answer.

Note: this example was taken from Andrew Ng’s DeepLearning.AI course on prompt engineering.

ChatGPT prompt giving wrong answer | DeepLearning.AI

But if you now ask ChatGPT to think step-by-step, provide its own answer, and check it against the student’s solution, it correctly determines that the student’s solution was wrong:

ChatGPT prompt giving the right answer after step-by-step prompting | DeepLearning.AI
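The step-by-step pattern can be sketched as a reusable prompt template. The wording below is illustrative only, not the exact prompt from the DeepLearning.AI course:

```python
def grading_prompt(problem: str, student_solution: str) -> str:
    """Build a prompt that asks the model to reason before judging.

    Illustrative template; the actual course prompt differs.
    """
    return (
        "First, work out your own solution to the problem step by step.\n"
        "Then compare your solution to the student's solution and decide\n"
        "whether the student's solution is correct.\n"
        "Do not judge the student's solution until you have solved the\n"
        "problem yourself.\n\n"
        f"Problem:\n{problem}\n\n"
        f"Student's solution:\n{student_solution}\n"
    )

print(grading_prompt("What is 15% of 80?", "15% of 80 is 10."))
```

Forcing the model to produce its own worked solution before comparing is what turns the earlier wrong verdict into the right one.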

In my opinion, if a student is smart or knowledgeable enough to craft a well-thought-out ChatGPT prompt like the second one, they should get full credit (and maybe even a few extra points!), since they not only learned the relevant concepts to answer the problem but also integrated ChatGPT and learned something new in the process that might help them no matter what career path they pursue.

Fundamental Shift In Programming

On June 13th, 2023, OpenAI released some exciting updates: cheaper ChatGPT/GPT-4 models, a cheaper embeddings model, and a 4x increase in context length. But beyond these, there was a new innovation that might completely revamp the way we write code to interact with machines. The way we code is significantly more rigid than the way we write. The same content can be conveyed very differently by different authors (or even by the same author) at different times. If I were to delete all the content above and write it again, it would likely differ word for word but convey the same ideas. Writing code, by contrast, is much more prescriptive.

OpenAI’s function calling updates herald a new, higher level of programming abstraction, sitting on top of programming languages like Python, C++, Java, and SQL. I’ve shown examples in a GitHub package where these functions blur the distinction between unstructured and structured queries: unstructured queries, coupled with structured functions, give structured outputs, and unstructured prompts abstracting SQL queries can be used to extract information from databases. The image below conceptualizes a generative programming language, sitting on top of commonly used languages like Python, C++, and Java, that could very well become the most popular language of the future.

Conceptualizing a higher generative language | Skanda Vivek, modified from this source.

As an example, let’s query a database linked to AnswerChatAI, an app powered by ChatGPT that I built for users to ask questions about websites or short strings of text. The database stores every question asked, the website link or text the question is about, and the answer generated (picture an Excel file with multiple columns if that is more convenient).

Let’s say you want to find the five most common questions and the number of times each was asked. Using a database language like SQL, the query is as follows (where questions is the column containing all questions, and table_query is the associated table):
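A sketch of such a query, run here against a small in-memory SQLite stand-in for the app's database (the column name questions and table name table_query come from the description above; the real AnswerChatAI schema may differ):

```python
import sqlite3

# Build a tiny in-memory stand-in for the AnswerChatAI logs table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_query (questions TEXT, link TEXT, answer TEXT)")
conn.executemany(
    "INSERT INTO table_query (questions) VALUES (?)",
    [("What is this site about?",)] * 3
    + [("Who is the author?",)] * 2
    + [("When was this published?",)],
)

# The five most common questions and how many times each was asked.
query = """
    SELECT questions, COUNT(*) AS num_asked
    FROM table_query
    GROUP BY questions
    ORDER BY num_asked DESC
    LIMIT 5
"""
for question, num_asked in conn.execute(query):
    print(question, num_asked)
```

The GROUP BY/ORDER BY/LIMIT combination is standard SQL, but getting it right requires knowing the schema and the syntax — exactly the familiarity the next paragraph calls clunky.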

This looks pretty clunky and requires familiarity with SQL. However, you can ask the same question in plain English, passed through a custom OpenAI function calling script, as follows:
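A hypothetical sketch of what such a script's setup could look like, using the function calling interface from the June 2023 openai-python API. The function name run_sql and its schema are my illustrative choices, not the actual AnswerChatAI implementation:

```python
# Schema describing a tool the model may "call" by returning structured
# arguments; names and descriptions here are illustrative assumptions.
run_sql_schema = {
    "name": "run_sql",
    "description": "Run a read-only SQL query against the questions database",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "A valid SQL SELECT statement",
            },
        },
        "required": ["query"],
    },
}

user_prompt = (
    "What are the five most common questions, and how many times was each asked?"
)

# The model call would look roughly like this (June 2023 openai-python API;
# requires an API key, so it is left commented out here):
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo-0613",
#     messages=[{"role": "user", "content": user_prompt}],
#     functions=[run_sql_schema],
# )
# The model returns a function_call whose arguments contain the SQL, which
# the script executes locally and feeds back for a natural-language answer.
```

The point is that the user only ever writes the plain-English prompt; the structured SQL is generated and executed behind the scenes.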

The response returns the requested list of questions and counts. As you can see, the natural-language query is far easier to write and directly captures the user’s intent.

Final Thoughts

Generative AI is sure to revamp the future of education, and this is happening as we speak. I gave a talk at the National Institute of Standards and Technology (NIST), and this was one of the most pressing questions raised afterward. The consensus is that there is no easy solution, apart from adapting the way we teach and learn. The good news is that we now have a chance to integrate AI into almost every course, all the way from elementary school to higher education. Echoing Andrew Ng and Andrea Pasinetti’s blog, it is important that schools teach AI and coding skills to every child. I’m excited to see, and be part of, the upcoming changes that will (hopefully) change education for the better.

Skanda Vivek

Senior Data Scientist on the Risk Intelligence team at OnSolve, where he develops advanced artificial-intelligence-based algorithms for rapidly detecting critical emergencies through big data.

Read Skanda’s Bio
