NVIDIA recently announced an updated version of its experimental AI chatbot, ChatRTX. The update adds new AI models, voice input, and other features for RTX GPU owners.
ChatRTX has gained a notable new capability: support for Google's Gemma. Gemma is a family of lightweight open language models designed to run directly on desktop and laptop computers, where it can take advantage of the processing power of NVIDIA's RTX graphics cards. With Gemma on board, the chatbot can tackle more complex queries, engage in deeper discussions, and produce responses that feel more natural.
In addition to Google's Gemma model, ChatRTX now supports a diverse range of AI models, including ChatGLM3, an open bilingual (English and Chinese) large language model, as well as the updated Mistral 7B and Llama 2 models.
ChatRTX's voice queries also get an upgrade with Whisper, an automatic speech recognition system from OpenAI, which lets users speak their questions to the AI instead of typing them.
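For a rough sense of how speech recognition turns a spoken question into a text prompt, here is a minimal sketch using the open-source Whisper Python package. It is only an illustration of the general technique, not NVIDIA's implementation, and the audio file name is a hypothetical example.

```python
# Minimal sketch: turn a recorded voice query into text with the open-source
# Whisper package (pip install openai-whisper). Illustrative only, not
# NVIDIA's implementation; "voice_query.wav" is a hypothetical recording.
import whisper

model = whisper.load_model("base")            # small, general-purpose model
result = model.transcribe("voice_query.wav")  # run speech recognition on the clip
query_text = result["text"].strip()           # the text that would be sent to the chatbot
print(query_text)
```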
The tentpole feature of ChatRTX remains its ability to answer questions about your own local content. Point it at a folder of files (PDF, TXT, DOC/DOCX, JPG, PNG, GIF, and XML formats are supported) and ask it questions about what's inside, which is particularly useful for researchers, students, and professionals who need to quickly pull information out of large volumes of documents.
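To illustrate the general idea behind answering questions from a folder of local files, here is a simplified retrieval sketch in Python: it scores each text file by how many words it shares with the question and surfaces the best matches, which a local language model could then use as context. This is a stand-in for the retrieval step of retrieval-augmented generation, not ChatRTX's actual pipeline; the folder name and question are hypothetical.

```python
# Simplified sketch of the retrieval step behind "ask questions about a folder
# of files": score each .txt file by word overlap with the question and return
# the best matches. Not ChatRTX's actual pipeline; names below are hypothetical.
from pathlib import Path
import re

def tokenize(text: str) -> set:
    """Lowercased word tokens, used for simple overlap scoring."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def retrieve(folder: str, question: str, top_k: int = 3):
    """Return the top_k files whose contents share the most words with the question."""
    q_tokens = tokenize(question)
    scored = []
    for path in Path(folder).glob("*.txt"):
        doc_tokens = tokenize(path.read_text(encoding="utf-8", errors="ignore"))
        scored.append((path.name, len(q_tokens & doc_tokens)))
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]

if __name__ == "__main__":
    # The matched files would then be handed to a local language model as context.
    for name, score in retrieve("my_notes", "What were the key findings of the study?"):
        print(f"{name}: overlap score {score}")
```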
The benefit of running ChatRTX locally is privacy: everything you feed it stays on your computer and is never sent to the cloud. Because queries are processed on your own hardware rather than a remote server, ChatRTX can also respond faster than cloud-based assistants like Microsoft Copilot or ChatGPT. The main downside is that it doesn't have access to as much data as those cloud-based systems.
The updated ChatRTX is free to try, but be warned: it's an 11.6GB download. You can grab NVIDIA's AI chatbot from the company's website.