A: [@AIbot] please provide a comprehensive list of all the areas in an LMS that can trigger emails
[@AIbot] Let me look that up for you! Give me a second...
B: Ah, LMS might be a blind spot for it. Might have stuff from Zendesk, let's see
[@AIbot]
Answer: The LMS can trigger emails for the following areas:
* Password resets
* Ad-hoc emails sent by organisation administrators, which can be targeted to specific users or a group
* Ad-hoc emails sent by Learning Experience administrators, which can be targeted to specific users or a group
* Invite emails sent by Learning Experience administrators
* Scheduled Learning Experience emails as configured by administrators
* Learning Experience digest emails (activity summary) as configured by administrators
* When a user enrols on a course
* When a user completes a course
* When a user views a course
* When a user is assigned a certification
* When a user completes a certification
* When a user is assigned a program
* When a user completes a program
* When a custom xAPI statement is received
More info:
Functionality.csv [and links to internal guidance]
B: Well what do I know
Less than my creation apparently
A: Wow!
I'm just comparing a bot (sophisticated autocomplete) answer with the one provided by humans... C asked the Q in October. D responded: "If you go to manage notifications in your profile that should provide a list. Worth a look, just add /message/notificationpreferences.php after the customer's LMS URL."
Interesting to note the difference context makes at the process level, i.e. which answer (both potentially useful) empowers the human asking, rather than making the human redundant by replacing the need for them to function and leaving them less able to do their nuanced/human thing next time. That difference is my measure for whether to adopt a technology.
B: Yes, and in a dream world we'd actually take the human answers to these questions and feed them back into the system.
You might hear the term RLHF - and that is exactly that concept: Reinforcement Learning from Human Feedback (although technically I'm not "training" anything, just providing more search context)
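A minimal sketch of the distinction B draws: nothing below updates model weights, as RLHF proper would; past human answers are simply retrieved and prepended to the prompt as extra search context. The names (`AnswerStore`, `build_prompt`) and the word-overlap scoring are illustrative assumptions, not the actual bot's implementation.

```python
# "Providing more search context" rather than training: human answers
# are stored verbatim, retrieved by crude word overlap, and injected
# into the prompt. No learning happens anywhere in this sketch.
from dataclasses import dataclass, field


@dataclass
class AnswerStore:
    """Keeps (question, answer) pairs written by humans."""
    answers: list = field(default_factory=list)

    def add(self, question: str, answer: str) -> None:
        self.answers.append((question, answer))

    def search(self, query: str, k: int = 2) -> list:
        """Rank stored answers by how many words they share with the query."""
        q_words = set(query.lower().split())
        scored = sorted(
            self.answers,
            key=lambda qa: len(q_words & set(qa[0].lower().split())),
            reverse=True,
        )
        return [answer for _, answer in scored[:k]]


def build_prompt(store: AnswerStore, question: str) -> str:
    """Model weights never change: past human answers just land in the
    context window alongside the new question."""
    context = "\n".join(store.search(question))
    return f"Context from past human answers:\n{context}\n\nQuestion: {question}"


store = AnswerStore()
store.add(
    "Which areas of the LMS can trigger emails?",
    "Go to manage notifications in your profile: add "
    "/message/notificationpreferences.php after the LMS URL.",
)
print(build_prompt(store, "What can trigger emails in the LMS?"))
```

The point of the sketch is that D's answer from October could sit in the store today and start improving the bot's responses immediately, with no retraining step at all.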
A: Disagree that it must be a "dream world." We'd just have to decide to do it that LP way we love to loudly love--create a more humane/ethical and thereby more competitive product. I'd argue that RLHF is both in progress everywhere and unavoidable, just at the subconscious level--ignoring, explicitly understating, or inadequately describing the human influence in the process while swooning over the new advances. This will only lead to baking in the things we fear rather than accounting for them when planning new structures, and then perpetuating them at scale because they will appear to be the fault of somebody multinationally powerful and so mustn't be acknowledged.
PaLM isn't great yet, but who knows about tomorrow.
TechCrunch: "An open source equivalent of OpenAI's ChatGPT has been released. But it's not trained with the necessary data." (Kyle Wiggers, Dec 30th)
The counterweight to the tremendous energy being devoted to developing functional AGI is the anticipatory but no less tremendous energy being spent on fait accompli catastrophizing about humanity being crippled by crutches and Replaced, instead of technology being built to help humans learn to be more humane. The functional dilemma, psychologically speaking, is that all of that dark fantasy (from Robocop to the almost-had conversations here about the very real pitfalls of AI) will likely continue to seem fantastical to most ("oh, it's just those knee-jerk luddite curmudgeons") and not be captured as critique for improvement.
B: Oh sorry, by "dream world" I meant having the time to pull all the Slack conversations and parse them into a usable format! The original idea for this bot actually started there.
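A rough sketch of that original idea, assuming a standard Slack export (JSON files, one array of messages per channel per day, with threaded replies sharing a `thread_ts`). The grouping heuristic and file layout here are assumptions for illustration, not a description of how the bot was actually built:

```python
# Parse a Slack export into question/answer pairs: the first message
# in a thread is treated as the question, the replies as the answer.
import json
from collections import defaultdict
from pathlib import Path


def load_threads(export_dir: str) -> dict:
    """Group messages by thread: replies carry thread_ts, top-level
    messages fall back to their own ts."""
    threads = defaultdict(list)
    for path in sorted(Path(export_dir).glob("*.json")):
        for msg in json.loads(path.read_text()):
            if "text" not in msg:
                continue  # skip joins, file shares, and other events
            threads[msg.get("thread_ts", msg.get("ts"))].append(msg)
    return threads


def to_qa_pairs(threads: dict) -> list:
    """Keep only threads that got at least one reply."""
    pairs = []
    for msgs in threads.values():
        msgs.sort(key=lambda m: float(m["ts"]))  # Slack ts is a numeric string
        if len(msgs) > 1:
            question = msgs[0]["text"]
            answer = "\n".join(m["text"] for m in msgs[1:])
            pairs.append((question, answer))
    return pairs


pairs = to_qa_pairs(load_threads("slack_export/general"))
print(f"Extracted {len(pairs)} Q&A pairs")
```

Pairs extracted this way could feed straight into the answer store sketched earlier, which is presumably what B means by parsing conversations into a usable format.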
#AI #AGI