Welcome to the Futurati Podcast’s “This Week in AI” for March 31st, 2023. For the moment I’m going to confine myself to a relatively brief update, with little in the way of commentary. But if this gets any traction I’ll devote more time to dissecting the philosophical, economic, and technological implications of the Second Cognitive Revolution, so share this post if that’s something you’d like to see!
The big story, of course, is an open letter from the Future of Life Institute calling for a six-month moratorium on experiments with powerful LLMs, signed by figures like Elon Musk, Steve Wozniak, and Tristan Harris. Eliezer Yudkowsky penned an opinion piece in Time magazine arguing that this doesn’t go nearly far enough, and that we may need an international agreement halting such experiments indefinitely.
In the meantime, people are using LLMs for code debugging, analytics, game programming, fiction writing, and dozens of other tasks.
At a White House press briefing, Peter Doocy asks whether we should be afraid that AI will kill us all.
Jason Abaluck on regulating AI.
Jacy Rees Anthis writes on some key questions to consider for digital minds.
Is it time for AI rights?
“Wolverine”, a GPT-4-powered Python debugger that can iteratively explain why your code is crashing.
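The core loop behind a tool like this is simple: run the script, catch the traceback, hand source plus error to the model, apply its suggested rewrite, and retry. Here's a minimal sketch of that idea, with the GPT-4 call abstracted as a `suggest_fix` callback; all names and structure here are my own illustration, not Wolverine's actual code.

```python
import subprocess
import sys


def run_script(path):
    """Run a Python script in a subprocess; return (success, stderr)."""
    proc = subprocess.run([sys.executable, path],
                          capture_output=True, text=True)
    return proc.returncode == 0, proc.stderr


def debug_loop(path, suggest_fix, max_attempts=3):
    """Iteratively run the script at `path`. On a crash, pass the source
    and traceback to `suggest_fix` (the LLM call in the real tool),
    overwrite the file with the proposed fix, and retry."""
    for _ in range(max_attempts):
        ok, err = run_script(path)
        if ok:
            return True
        with open(path) as f:
            source = f.read()
        fixed = suggest_fix(source, err)  # hypothetical LLM round-trip
        with open(path, "w") as f:
            f.write(fixed)
    return run_script(path)[0]
```

In the real tool the interesting work is in the prompt: the model is asked to explain the crash and emit a patched version of the file, and the loop repeats until the script exits cleanly.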
Using AI to predict where users will look as they engage with your designs.
AI is the future of cybersecurity.
Abacus.ai is building a tool that will let users generate answers grounded in their own knowledge base.
How will plugins impact the economics of development and the ability to run LLMs locally?
Ethan Mollick compares working with Bing’s AI to working with a Ph.D. student.
Using Replit and ChatGPT to create a dashboard for a business.
GPT-4 writes a 115-page fantasy novel (which is apparently pretty good).
Researchers at Microsoft want to give LLMs agency and volition. I’m not at all sure this is a good idea.
Databricks is building ‘Dolly’, a ‘democratized’ LLM.
Having ChatGPT recreate the classic video game Pong simply by prompting it with “making the classic video game pong.”
Replit is partnering with Google to enhance the use of generative AI in software development.
What are the next steps for LLMs? How far can they be scaled? Sebastian Raschka weighs in.
An AI tool that works in Excel to automate tedious tasks.
Asking ChatGPT how we could stop a powerful AI from becoming a paperclip maximizer.
Making a responsive lo-fi radio station with Replit, ChatGPT, and Midjourney.
How will collective stores of knowledge like Stack Overflow be impacted by the use of LLMs?
Once spreadsheets were introduced there was talk of an apocalypse in bookkeeping jobs. What actually happened?
Prediction: in the future, every major artist will have trained a generative model on their corpus, and Spotify will dominate the space.
Richard Ngo points to an argument that LLMs can learn to reason causally.
You can now chat directly with ChatGPT on your phone.
“Teaching Algorithmic Reasoning via In-context Learning” (done correctly, in-context learning can teach LLMs to do accurate quantitative reasoning).
As I said, I’m keeping these first few editions brief. Please share this post and drop me a line if there’s a change you want to see or something you think I should cover. If there seems to be real interest, I’ll devote more time and attention to it, so let me know if you find this valuable.