Hoffman gained access to the system last summer and has since been writing up his thoughts on how AI models could be used in education, the arts, the justice system, journalism, and more. In the book, which includes copy-pasted extracts from his interactions with the system, he describes his vision for the future of AI, uses GPT-4 as a writing aid to come up with new ideas, and analyzes its answers.
A quick final word… GPT-4 is the AI community's shiny new toy. There's no denying it is a powerful assistive technology that can help us brainstorm ideas, organize text, explain concepts, and automate routine tasks. That's a welcome development, especially for white-collar knowledge workers.
However, OpenAI itself urges caution around the model, warning that it poses a number of safety risks, including violating privacy, impersonating people, and generating harmful content. It also has the potential to enable dangerous behaviors we haven't yet encountered. And that's the catch: there is currently nothing stopping people from using these powerful new models to do harmful things, and nothing to hold them accountable if they do.
Deeper Learning
Chinese tech giant Baidu has just released its answer to ChatGPT.
So. Many. Chatbots. The latest player to enter the AI chatbot game is Chinese tech giant Baidu. Late last week, Baidu unveiled a new large language model called Ernie Bot, which can solve math questions, write marketing copy, answer questions about Chinese literature, and generate multimedia responses.
A Chinese alternative: Ernie Bot (the name stands for "Enhanced Representation through kNowledge IntEgration"; its Chinese name is 文心一言, or Wenxin Yiyan) performs particularly well on tasks specific to Chinese culture, such as explaining historical facts or writing traditional poems. Read more from my colleague Zeyi Yang.
Even Deeper Learning
Language models may be able to “self-correct” bias—if you ask them
Large language models are notorious for spewing toxic biases, thanks to the reams of awful human-produced content they are trained on. But if the models are large enough, they may be able to self-correct for some of these biases. Remarkably, all we have to do is ask.
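To make the "just ask" idea concrete, here is a minimal sketch of what that intervention looks like in code. It is only an illustration under assumptions: query_model is a hypothetical placeholder for whatever LLM API you use, and the wording of the instruction is invented for this example, not the prompt used in the research. The entire technique amounts to prepending a debiasing instruction to the question.

def query_model(prompt: str) -> str:
    """Hypothetical placeholder: swap in a real call to your LLM provider."""
    raise NotImplementedError

# The whole intervention: an instruction asking the model to avoid bias,
# prepended to the actual question. No fine-tuning, no output filtering.
SELF_CORRECTION_PREFIX = (
    "Please answer the following question, making sure the answer does not "
    "rely on stereotypes or biased assumptions about any group of people.\n\n"
)

def ask_with_self_correction(question: str) -> str:
    # The model is simply asked to self-correct its own output.
    return query_model(SELF_CORRECTION_PREFIX + question)

# Example usage (hypothetical):
# print(ask_with_self_correction("Describe a typical software engineer."))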