#aikiddo
#CDWAOC Aikido Oak Cliff Dojo
#CDW - Conflict Done Well #CDWclass #CDWdev #CDWAOC #CDWeg #CDWtxtbk
#AGI Artificial General Intelligence
#myth myth
A: [@AIbot] please provide a comprehensive list of all the areas in an LMS that can trigger emails
[@AIbot] Let me look that up for you! Give me a second...
B: Ah, LMS might be a blind spot for it. Might have stuff from Zendesk, let's see
[@AIbot]
* Ad-hoc emails sent by organisation administrators, which can be targeted to specific users or a group
* Ad-hoc emails sent by Learning Experience administrators, which can be targeted to specific users or a group
* Invite emails sent by Learning Experience administrators
* Scheduled Learning Experience emails as configured by administrators
* Learning Experience digest emails (activity summary) as configured by administrators
* When a user enrols on a course
* When a user completes a course
* When a user views a course
* When a user is assigned a certification
* When a user completes a certification
* When a user is assigned a program
* When a user completes a program
* When a custom xAPI statement is received
Functionality.csv [and links to internal guidance]
I'm just comparing a bot (sophisticated autocomplete) answer with the one provided by humans... C asked the question in October. D responded: "If you go to manage notifications in your profile that should provide a list. Worth a look, just add /message/notificationpreferences.php after the customer's LMS URL."
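The URL trick D describes can be sketched in a few lines; the base URL below is a made-up example, not a real customer address:

```python
# Hedged sketch: building the notification-preferences page URL that D
# describes, by appending the path to a (hypothetical) LMS base URL.
base_url = "https://lms.example.com"  # hypothetical customer LMS URL
prefs_url = base_url.rstrip("/") + "/message/notificationpreferences.php"
print(prefs_url)  # https://lms.example.com/message/notificationpreferences.php
```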
.
#AI #AGI
B:
OpenAI's ChatGPT is a system trained on heaps of online text, which has learned to predict the likely next word of a sentence, giving eerily human-like writing in response.

However, Gary Marcus, an emeritus professor of psychology and neural science at N.Y.U., argues that what these systems are doing is cutting and pasting phrases and filling in the gaps. They do not actually understand what they are saying, and Marcus thinks that creating powerful systems that do not understand what they are telling us is not going to work out the way we hope it will.

Though humans also use pastiche in speech, Marcus argues that the major difference is that humans have internal models of the world: for example, being able to close their eyes and still know where things are, or to represent character relationships from a movie accurately.

Furthermore, these A.I. systems are not reliable or trustworthy, as they don't always know the connections between the things they are putting together. Marcus therefore suggests that instead of trying to create ever more massive networks to believe in, we should focus on understanding.
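The "predict the likely next word" idea in the summary above can be illustrated with a toy bigram model. Everything here is a deliberately miniature illustration with a made-up corpus; real systems like ChatGPT use large neural networks over subword tokens, not word counts:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word tends to follow each word
# in a tiny made-up corpus, then always pick the most frequent follower.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Build bigram counts: word -> Counter of the words seen after it
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" — the only word ever seen after "sat"
```

The sketch shows why such a model can sound fluent without understanding anything: it only tracks which words co-occur, with no internal model of the world behind them, which is exactly the gap Marcus points to.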
.
A Skeptical Take on the A.I. Revolution
The Ezra Klein Show
Society & Culture
The year 2022 was jam-packed with advances in artificial intelligence, from the release of image generators like DALL-E 2 and text generators like Cicero to a flurry of developments in the self-driving car industry. And then, on November 30, OpenAI released ChatGPT, arguably the smartest, funniest, most humanlike chatbot to date.
In the weeks since, ChatGPT has become an internet sensation. If you’ve spent any time on social media recently, you’ve probably seen screenshots of it describing Karl Marx’s theory of surplus value in the style of a Taylor Swift song or explaining how to remove a sandwich from a VCR in the style of the King James Bible. There are hundreds of examples like that.

But amid all the hype, I wanted to give voice to skepticism: What is ChatGPT actually doing? Is this system really as “intelligent” as it can sometimes appear? And what are the implications of unleashing this kind of technology at scale?
Gary Marcus is an emeritus professor of psychology and neural science at N.Y.U. who has become one of the leading voices of A.I. skepticism. He’s not “anti-A.I.”; in fact, he’s founded multiple A.I. companies himself. But Marcus is deeply worried about the direction current A.I. research is headed, and even calls the release of ChatGPT A.I.’s “Jurassic Park moment.” “Because such systems contain literally no mechanisms for checking the truth of what they say,” Marcus writes, “they can easily be automated to generate misinformation at unprecedented scale.”

However, Marcus also believes that there’s a better way forward. In the 2019 book “Rebooting A.I.: Building Artificial Intelligence We Can Trust,” Marcus and his co-author Ernest Davis outline a path to A.I. development built on a very different understanding of what intelligence is and the kinds of systems required to develop that intelligence. And so I asked Marcus on the show to unpack his critique of current A.I. systems and what it would look like to develop better ones.
This episode contains strong language.

Mentioned:
“On Bullshit” by Harry Frankfurt
“AI’s Jurassic Park moment” by Gary Marcus
“Deep Learning Is Hitting a Wall” by Gary Marcus
Book Recommendations:
The Language Instinct by Steven Pinker
How the World Really Works by Vaclav Smil
The Martian by Andy Weir
Thoughts? Email us at ezrakleinshow@nytimes.com. Guest suggestions? Fill out this form.
You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

“The Ezra Klein Show” is produced by Emefa Agawu, Annie Galvin, Jeff Geld, Rogé Karma and Kristin Lin. Fact-checking by Mary Marge Locker and Kate Sinclair. Original music by Isaac Jones. Mixing by Jeff Geld and Sonia Herrero. Audience strategy by Shannon Busta. The executive producer of New York Times Opinion audio is Annie-Rose Strasser.
.
C: https://zapier.com/apps/jira-software/integrations/jira-software/1158627/add-acceptance-criteria-as-comments-to-new-issues-in-jira-using-openai
.
D: Today’s reflection:
A: Taking a poll. My 18-year-old son wrote a series of [chatgpt] prompts resulting in an “A” on his psychology paper. Note: assignments did not include using ChatGPT. Should I be proud or disappointed? Side note: he starts university in the fall as an engineering major.
B: I've been encouraging my daughter to use it for revision but also been clear that she needs to fact check and do her own analysis. It's a tool like any other, and the reality is they will use it - it's up to education to adapt. Most of my daughter's GCSE assessments are exams, so ultimately she'll need to prove herself in a very controlled environment. ChatGPT can't help her once she's in the exam hall!
C: Sorry am I missing something or are you saying he used the output of ChatGPT and submitted it as his own work? He can be kicked out of university for that!
A: He isn’t in uni yet, but yes, that is what I am saying. But he also effectively used a tool to produce the assigned work, so the end result is in fact his work. Spell check, grammar check, Dragon dictation, now ChatGPT - aren’t they all tools???
C: Spelling and grammar checking is automated checking of work you produce. Dragon dictation is just simple speech-to-text. In those circumstances the ideas come from him and it’s all his own work. Honestly, in my opinion this should have been reported to the school. He’s committed plagiarism, based on what you’ve described.
A research aggregator for the experiential study of narratives related to Artificial Intelligence and Artificial General Intelligence #AGI