Future of Science and Technology Q&A (August 16, 2024)
Manage episode 441440116 series 1692780
Stephen Wolfram answers questions from his viewers about the future of science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa
Questions include:
- What do you view as the best strategies for reducing or eliminating hallucination/confabulation right now? Is there any chance that we'll be able to get something like confidence levels along with the responses we get from large language models? (See the sketch after this list.)
- I love this topic (fine-tuning of LLMs); it's something I'm currently studying.
- The AI Scientist is an LLM-based system that can conduct scientific research independently, from generating ideas to writing papers and even peer-reviewing its own work. How do you see this technology impacting the development of Wolfram|Alpha and other knowledge-based systems in the future?
- It's fascinating how differently LLMs respond depending on how you pose your questions.
- I have found that giving key terms and then asking the LLM to take the "concepts" and relate them in a particular way seems to work pretty well.
- How are we going to formalize the language structures arising from this microinformatization, which has created a semantic syntax we had not observed through structuralism?
- Why is being rude and "loud" to the model always the most efficient way to get what you want if the one-shot fails? I notice this applies to nearly all of them. I think it's also among the top prompt engineering "rules." I always feel bad even though the model has no feelings, but I need the proper reply in the fewest questions possible.
- AI Scientist does what you're describing. The subtle difference is that it generates plausible ideas, creates code experiments, and then scores them; the question is whether this approach can or should be extended with Alpha.
- How soon do you think we'll have LLMs that can retrain in real time?
- What's your take on integrating memory into LLMs to enable retention across sessions? How could this impact their performance and capabilities?
- Do you think computational analytics tools are keeping up with the recent AI trends?
- Would it be interesting to let the LLM invent new tokens in order to compress its memories even further?
- Philosophical question: if one posts a Wolfram-generated plot of a linear function to social media, should it be tagged "made with AI" given that the medium is math? It's probably up to the social media platform; just curious. A math plot is objective, so it's different from doing an AI face swap, for example.
- For future archaeologists: this stream was mostly human-generated.
- Professor_Neurobot: Despite my name, I promise I am not a bot.
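On the confidence-levels question, one rough proxy already available today is to read per-token log probabilities from an LLM API and aggregate them. Below is a minimal sketch assuming the OpenAI Python SDK (openai >= 1.0); the model name is a placeholder and the averaging heuristic is illustrative, not a calibrated confidence measure.

```python
# Rough per-response "confidence" from token log probabilities.
# Assumes the OpenAI Python SDK (openai >= 1.0); the model name is a placeholder.
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "In what year was Wolfram|Alpha launched?"}],
    logprobs=True,        # request per-token log probabilities
    top_logprobs=1,
)

answer = resp.choices[0].message.content
token_logprobs = [t.logprob for t in resp.choices[0].logprobs.content]

# Geometric-mean token probability: a crude, uncalibrated confidence proxy.
avg_prob = math.exp(sum(token_logprobs) / len(token_logprobs))

print(answer)
print(f"mean token probability: {avg_prob:.2f}")
```

Note that such a score reflects how certain the model is about its own wording, not whether the answer is factually correct, which is part of why calibrated confidence for LLM output remains an open problem.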
434 episodes