College professors are going back to paper exams and handwritten essays to fight students using ChatGPT::The growing number of students using the AI program ChatGPT as a shortcut in their coursework has led some college professors to reconsider their lesson plans for the upcoming fall semester.

  • TimewornTraveler@lemm.ee
    link
    fedilink
    English
    arrow-up
    57
    arrow-down
    31
    ·
    1 year ago

    Can we just go back to calling this shit Algorithms and stop pretending it’s actually Artificial Intelligence?

    • WackyTabbacy42069@reddthat.com
      link
      fedilink
      English
      arrow-up
      56
      arrow-down
      20
      ·
      1 year ago

      It actually is artificial intelligence. What are you even arguing against, man?

      Machine learning is a subset of AI, and neural networks are a subset of machine learning. Saying an LLM (based on neural networks for prediction) isn’t AI because you don’t like it is like saying rock and roll isn’t music.

      • over_clox@lemmy.world
        link
        fedilink
        English
        arrow-up
        16
        arrow-down
        26
        ·
        1 year ago

        If AI were ‘intelligent’, it wouldn’t have written me a set of instructions when I asked it how to inflate a foldable phone. Seriously, check my first post on Lemmy…

        https://lemmy.world/post/1963767

        An intelligent system would have stopped to say something like “I’m sorry, that doesn’t make any sense, but here are some related topics to help you”

        • WackyTabbacy42069@reddthat.com
          link
          fedilink
          English
          arrow-up
          18
          arrow-down
          3
          ·
          1 year ago

          AI doesn’t necessitate a machine even being capable of stringing the complex English language into a series of steps toward something pointless and unattainable. That in itself is remarkable, however naive it may be in believing that a foldable phone can be inflated. You may be confusing AI with AGI, which is when the intelligence and reasoning level is at or slightly above that of humans.

          The only real requirement for AI is that a machine take actions in an intelligent manner. Web search engines, dynamic traffic lights, and chess bots all qualify as AI, despite none of them being able to tell you rubbish in proper English.

          • TimewornTraveler@lemm.ee
            link
            fedilink
            English
            arrow-up
            8
            arrow-down
            14
            ·
            edit-2
            1 year ago

            The only real requirement for AI is that a machine take actions in an intelligent manner.

            There’s the rub: defining “intelligent”.

            If you’re arguing that traffic lights should be called AI, then you and I might have more in common than we thought. We both believe the same thing: that ChatGPT isn’t any more “intelligent” than a traffic light. But you want to call them both intelligent, and I want to call neither of them that.

            • throwsbooks@lemmy.ca
              link
              fedilink
              English
              arrow-up
              12
              arrow-down
              3
              ·
              1 year ago

              I think you’re conflating “intelligence” with “being smart”.

              Intelligence is more about taking in information and being able to make a decision based on that information. So yeah, automatic traffic lights are “intelligent” because they use a sensor to check for the presence of cars and “decide” when to switch the light.

              Acting like some GPT is on the same level as a traffic light is silly though. On a base level, yes, it “reads” a text prompt (along with any messaging history) and decides what to write next. But the decision it’s making is much more complex than “stop or go”.

              I don’t know if this is an ADHD thing, but when I’m talking to people, sometimes I finish their sentences in my head as they’re talking. Sometimes I nail it, sometimes I don’t. That’s essentially what ChatGPT is: a sentence finisher that happened to read a huge amount of text content on the web, so it’s got context for a bunch of things. It doesn’t care if it’s right, and it doesn’t look things up before it says something.

              But to have a computer be able to do that at all?? That’s incredible, and it took over 50 years of AI research to hit that point (yes, it’s been a field in universities for a very long time, with people saying for most of that time that it was impossible), and we only hit it because our computers got powerful enough to do it at scale.
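              That “sentence finisher” idea can be sketched with a toy bigram model — a drastically simplified, hypothetical illustration (real LLMs use neural networks over tokens, not raw word counts; the corpus and function names here are made up):

```python
from collections import Counter, defaultdict

# Toy "sentence finisher": count which word most often follows each
# word in a tiny corpus, then always predict the most frequent follower.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def finish(word):
    """Return the most common word seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(finish("on"))  # "the" -- every "on" in the corpus is followed by "the"
```

              It doesn’t “know” anything; it just continues text the way the text it saw tends to continue — which is the core of the point above, minus fifty years of scale.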

              • ParsnipWitch@feddit.de
                link
                fedilink
                English
                arrow-up
                2
                arrow-down
                1
                ·
                edit-2
                1 year ago

                Intelligence is more about taking in information and being able to make a decision based on that information.

                Where does that come from? A better gauge for intelligence is whether someone or something is able to resolve a problem that they did not encounter before. And arguably all current models completely suck at that.

                I also think the word “AI” is used quite a bit too liberally. It confuses people who have zero knowledge of the topic. And when an actual AI comes along, we will have to make up a new word, because “general artificial intelligence” won’t be distinctive enough for corporations to market their new giant leap in technology….

                • throwsbooks@lemmy.ca
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  edit-2
                  1 year ago

                  I would suggest the textbook Artificial Intelligence: A Modern Approach by Russell and Norvig. It’s a good overview of the field and has been in circulation since 1995. https://en.m.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach

                  Here’s a photo, as an example of how this book approaches the topic, in that there’s an entire chapter on it with sections on four approaches, and that essentially even the researchers have been arguing about what intelligence is since the beginning.

                  But all of this has been under the umbrella of AI. Just because corporations have picked up on it, doesn’t invalidate the decades of work done by scientists in the name of AI.

                  My favourite way to think of it is this: people have forever argued whether or not animals are intelligent or even conscious. Is a cat intelligent? Mine can manipulate me, even if he can’t do math. Are ants intelligent? They use the same biomechanical constructs as humans, but at a simpler scale. What about bacteria? Are viruses alive?

                  If we can create an AI that fully simulates a cockroach, down to every firing neuron, does it mean it’s not AI just because it’s not simulating something more complex, like a mouse? Does it need to exceed a human to be considered AI?

                  • ParsnipWitch@feddit.de
                    link
                    fedilink
                    English
                    arrow-up
                    1
                    ·
                    1 year ago

                    Intelligence (in a biological sense) is defined differently from how computer scientists approach describing artificial intelligence. “Making a decision based on information” is not a sufficient criterion to declare something intelligent in a biological sense. But that’s what a lot of people (wrongly) assume when they hear artificial “intelligence”.

                    To describe an AI as intelligent, as understood in natural science, you obviously can’t use the criteria applied in computer science. There is broad consensus in biological science that animals have intelligence. Just the scope of intelligence is heavily discussed, or rather, which level of intelligence each of them reaches.

                    Viruses are not considered lifeforms, btw. Naturally, there is no 100% answer to anything in science. But that shouldn’t be confused with there being no substance to these answers.

            • sin_free_for_00_days@sopuli.xyz
              link
              fedilink
              English
              arrow-up
              7
              ·
              1 year ago

              I’m with you on this and think the AI label is stupid and misleading. But times and language change, and you end up being a Don Quixote-type figure.

        • XTornado@lemmy.ml
          link
          fedilink
          English
          arrow-up
          8
          arrow-down
          2
          ·
          1 year ago

          Whether or not it should be called AI, I have no idea…

          But let’s be clear: some humans are “intelligent” and say crazier things…

        • jarfil@lemmy.world
          link
          fedilink
          English
          arrow-up
          5
          ·
          1 year ago

          If “making sense” was a requirement of intelligence… there would be no modern art museums.

          • over_clox@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            3
            ·
            1 year ago

            Instructions unclear, inflates phone.

            Seriously, if it were actually intelligent, yet also writing out something meant to be considered ‘art’, I’d expect it to include a disclaimer at the end declaring it satire.

            • jarfil@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              1 year ago

              That would require a panel of AIs to discuss whether “/s” or “no /s”…

              As it is, it writes anything a person could have written, some of it great, some of it straight from Twitter. We are supposed to presume at least some level of intelligence for either.

        • Jordan Lund@lemmy.one
          link
          fedilink
          English
          arrow-up
          5
          ·
          edit-2
          1 year ago

          Inflating a phone is super easy though!

          Overheat the battery. ;) Phone will inflate itself!

      • TimewornTraveler@lemm.ee
        link
        fedilink
        English
        arrow-up
        15
        arrow-down
        25
        ·
        edit-2
        1 year ago

        I am arguing against this marketing campaign, that’s what. Who decides what “AI” is, and how did we come to decide what fits that title? The concept of AI has been around a long time, since the Greeks, and it has always been the concept of a man-made man. In modern times, it’s been represented as a sci-fi fantasy of sentient androids. “AI” is a term with heavy associations already cooked into it. That’s why calling it “AI” is just a way to make it sound like high-tech, futuristic dreams come true. But a predictive text algorithm is hardly “intelligence”. It’s only being called that to make it sound profitable. Let’s stop calling it “AI” and start calling out their bullshit. This is just another cryptocurrency scam: a concept that could theoretically work and be useful to society, but one that is not being implemented in a way that lives up to its name.

        • GenderNeutralBro@lemmy.sdf.org
          link
          fedilink
          English
          arrow-up
          13
          arrow-down
          1
          ·
          1 year ago

          Who decides what “AI” is and how did we come to decide what fits that title?

          Language is ever-evolving, but a good starting point would be McCarthy et al., who wrote a proposal back in the 50s. See http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html

          Techniques have come into and gone out of fashion, and obviously technology has improved, but the principles have not fundamentally changed.

        • BigNote@lemm.ee
          link
          fedilink
          English
          arrow-up
          11
          ·
          1 year ago

          The field of computer science decided what AI is. It has a very specific set of meanings and some rando on the Internet isn’t going to upend decades of usage just because it doesn’t fit their idea of what constitutes AI or because they think it’s a marketing gimmick.

          It’s not. It’s a very specific field in computer science that’s been worked on since the 1950s at least.

    • chicken@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      8
      ·
      1 year ago

      Maybe machine learning models technically fit the definition of “algorithm” but it suits them very poorly. An algorithm is traditionally a set of instructions written by someone, with connotations of being high level, fully understood conceptually, akin to a mathematical formula.

      A machine learning model is a soup of numbers that maybe does something approximately like what the people training it wanted it to do, using arbitrary logic nobody can expect to follow. “Algorithm” is not a great word to describe that.
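      A hypothetical side-by-side sketch of that contrast (both pieces are made up for illustration): an explicit, human-written algorithm next to a tiny “soup of numbers” whose behavior lives in a learned weight rather than in readable instructions:

```python
# An "algorithm" in the traditional sense: explicit, human-written steps
# (Euclid's GCD). Every line has a meaning a person can follow.
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

# A minimal "learned model": a single weight nudged toward y = 2x by
# gradient descent on a squared error. The behavior is encoded in the
# final value of w, not in any human-readable instructions.
w = 0.0
for _ in range(100):
    for x, y in [(1, 2), (2, 4), (3, 6)]:
        w -= 0.05 * 2 * (w * x - y) * x  # gradient of (w*x - y)**2

print(gcd(12, 18))  # 6
print(round(w, 2))  # converges to 2.0
```

      Scale the second half up to billions of weights and you get the “soup of numbers” described above — it approximately does what its training aimed for, but no one can read the weights the way they can read the GCD loop.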

    • Venia Silente@lemm.ee
      link
      fedilink
      English
      arrow-up
      11
      arrow-down
      6
      ·
      1 year ago

      Please let’s not defame Dijkstra and other Algorithms like this. Just call them “corporate crap”, like what they are.