Hello everyone, welcome to this week’s Discussion thread!

This week, we’re focusing on AI in education. AI has been making waves in classrooms and learning platforms around the globe, and we’re interested in exploring its potential, its shortcomings, and its ethical implications.

For instance, tools like ChatGPT can serve a variety of educational purposes. On one hand, they can assist students in their learning journey, offering explanations and facilitating understanding through virtual Socratic dialogue. On the other hand, they open the door to potential misuse, such as writing essays or completing homework, essentially enabling academic dishonesty.

Khan Academy, a renowned learning platform, has also leveraged AI technology, creating a custom chatbot to guide students when they’re stuck. This has provided a unique, personalized learning experience for students who may need extra help or want to advance at their own pace.

But this is just the tip of the iceberg. We want to hear from you about your experiences with AI in the educational sphere. Have you found an interesting use case for AI in learning? Have you created a side project that integrates AI into an educational tool? What does the future hold for AI in education, in your view?

Looking forward to your contributions!

  • TootSweet@latte.isnot.coffee · 5 points · 1 year ago

    I forget where I saw it now, but I ran across a story about a teacher who gave an assignment to get ChatGPT to write an essay on whatever subject the students were learning, and then the students were to write an essay on the accuracies and inaccuracies of the ChatGPT essay. I thought that was pretty genius.

    • mabcat@programming.dev · 2 points · 1 year ago

      The genius move is to get ChatGPT to write both the essay and the critique. I don’t even have to try this to know the output would be better quality than a student’s own critique. From a teaching perspective, the worst thing about this is that the essay and critique would both be full of subtle errors, and writing feedback about subtle errors takes hours. Those hours could have been spent guiding students who did the work and actually have subtle misunderstandings.

      • heavy@sh.itjust.works · 2 points · 1 year ago

        I don’t think that’s necessarily fair, or the point. Usually the point of an essay is to get students to think critically about the subject, derive some conclusions, and present evidence to support their points. I think having students critique an AI-driven essay removes some of the “middle man” of content generation in essay writing, but still gets the student to think about the subject, gather some perspective, and ideally look into evidence to support that perspective.

        I’d add that I don’t think the goal is to write “perfect” critiquing feedback that’s free from errors. Errors are also part of the learning process :)

  • mabcat@programming.dev · 2 points · 1 year ago

    “Potential misuse” is a bit of a weasel phrase… student use of AI assistants is rampant, and the ways they use them almost always constitute academic misconduct, so it’s actual misuse.

    Our institution bans use of AI assistants during assessments, unless permitted by a subject’s coordinator. This is because using ChatGPT in a way that’s consistent with academic integrity is basically impossible. Fixing this means fixing ChatGPT etc, not reimagining academic integrity. Attribution of ideas, reliability of sources, and individual mastery of concepts are more important than ever in the face of LLMs’ super-convincing hallucinations.

    There are no Luddites where I teach. Our university prepares students for professional careers, and since in my field we use LLMs all day long for professional work, we also have to model this for students and teach them how it’s done. I demonstrate good and bad examples from Copilot and ChatGPT, quite frequently co-answer student questions in conversation with ChatGPT, and always acknowledge LLM use in materials preparation.

    I also have a side project that provides a chat interface to the subject contents (GPT-4 synthesis over a vector store). It dramatically improves the quality of AI assistant answers, and makes it much easier to find where in the materials a concept was discussed. Our LMS search sucks even for plain text content. This thing fixes that and also indexes into code, lecture recordings, slides, screenshots, explainer videos… I’m still discovering new abilities that emerge from this setup.
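    For anyone curious what that retrieval-over-a-vector-store pattern looks like, here’s a minimal sketch. A toy bag-of-words similarity stands in for real embeddings, and the chunk texts, source names, and `build_prompt` helper are all made up for illustration; a real setup would embed with a model and send the prompt to the LLM.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real setup would call an
    # embedding model here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    def __init__(self):
        self.chunks = []  # list of (source, text, vector)

    def add(self, source, text):
        self.chunks.append((source, text, embed(text)))

    def search(self, query, k=2):
        # Rank stored chunks by similarity to the query.
        qv = embed(query)
        ranked = sorted(self.chunks, key=lambda c: cosine(qv, c[2]), reverse=True)
        return ranked[:k]

def build_prompt(query, hits):
    # Retrieved chunks go into the LLM prompt with their sources, so the
    # answer can point at where in the materials a concept was discussed.
    context = "\n".join(f"[{src}] {txt}" for src, txt, _ in hits)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

store = VectorStore()
store.add("week3-slides", "A mutex provides mutual exclusion for shared state.")
store.add("week5-lecture", "Dynamic programming caches subproblem results.")
hits = store.search("what is a mutex")
print(hits[0][0])  # best-matching source: week3-slides
```

    The “find where a concept was discussed” part falls out for free: the source tag on the top-ranked chunk is the location.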

    I think the future is very uncertain. Students who are using ChatGPT to bluff their way through courses have no skills moat and will find their job roles automated away in very short order. But this realisation requires a two-year planning horizon and the student event horizon is “what’s due tomorrow?” I haven’t seen much discussion of AI in education that’s grounded in educational psychology or a practical understanding of how students actually behave. AI educational tools will be a frothy buzzword-filled market segment where a lot of money is made/spent but overall learning outcomes remain unchanged.

  • varsock@programming.dev · 2 points · 1 year ago (edited)

    It’s been a while since I was in a formal classroom setting but as an engineer I’d assert that I’m constantly learning. So I’ll offer my perspective on AI in education for those continuing education.

    I find myself taking more risks at work and in my personal projects, experimenting with new technology and languages. AI’s shortcomings grow exponentially as the technical complexity of the prompt grows linearly, but for a beginner getting their feet wet in a subject, it considerably lowers the barrier to entry. I find I’m not plagued so much by “analysis paralysis” after reading blogs and tutorials written by authors with varied understanding of the subject. With a few prompts I can effectively “filter down” the topics I need to read more about to produce something useful. No more fear of “what if I missed something.”

    Then there’s the aspect of creating a tailored refresher for yourself on a class of “stuff you have to relearn every time you have to use it” (love that comment). Or asking an AI to explain what a piece of content means. For example, if someone wrote a really complex Makefile, I can dump a tree of the repository, ask the AI to expand all the variables in the Makefile, and then read what every step is doing.
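    The variable-expansion part of that is mechanical enough to sketch without an AI. Here’s a toy Python version, assuming only simple `VAR = value` lines and `$(VAR)` references; real make semantics (`:=` vs `=` expansion timing, functions, `+=`, conditionals) are ignored.

```python
import re

def parse_vars(makefile_text):
    # Collect simple "VAR = value" assignments into a dict.
    defs = {}
    for line in makefile_text.splitlines():
        m = re.match(r'^(\w+)\s*[:?]?=\s*(.*)$', line)
        if m:
            defs[m.group(1)] = m.group(2)
    return defs

def expand(text, defs, depth=10):
    # Recursively substitute $(VAR) references, with a depth cap
    # so self-referencing definitions can't loop forever.
    if depth == 0:
        return text
    out = re.sub(r'\$\((\w+)\)', lambda m: defs.get(m.group(1), ''), text)
    return out if out == text else expand(out, defs, depth - 1)

mk = "CC = gcc\nFLAGS = -O2 $(WARN)\nWARN = -Wall"
defs = parse_vars(mk)
print(expand("$(CC) $(FLAGS) -o app", defs))  # → gcc -O2 -Wall -o app
```

    (In practice `make -n` gets you most of the way there too, since a dry run prints recipe commands with variables already expanded.)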

    But you definitely hit the nail on the head in pointing out that “it opens a door to potential misuse”. I become dependent on it for tasks that I would otherwise only learn by doing. And, to use a data-storage analogy, in some ways I become less efficient and more error prone, because I no longer access the knowledge I have cached in memory (my brain) and instead access data on disk (taking the time to ask an AI), which can return incorrect data (data corruption, bad sectors, etc.) that’s difficult to catch.

    As an educational tool, I think those who behave as “AI gluttons” and overindulge in AI to the point of excess risk eroding their critical thinking and creativity. And those who don’t supplement their learning with AI risk being left behind by those who use it responsibly. In the same way, I think AI will not replace programmers, but programmers who use AI will replace programmers who don’t.