Large Language Models (LLMs) have evolved to include capabilities that simplify or augment a wide range of tasks. As enterprises consider wide-scale adoption of LLMs for use cases across their workforce or within applications, it’s important to note that while foundation models provide general reasoning and the ability to follow instructions, they lack core knowledge of the business. That’s where fine-tuning becomes a critical step.
Realtime data is rapidly becoming a standard in many consumer applications. From responsive chat applications to low-latency financial applications, nobody wants to refresh their browser for new data. With lots of data bouncing around Kafka behind a firewall, the question is how to serve that information to your users without compromising on latency. Ably provides a seamless way to deliver this data to your end users’ devices, globally, through a direct integration with Confluent Cloud.
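As a rough sketch of what the consuming side can look like, the snippet below uses Ably's JavaScript SDK to subscribe to a channel that the Confluent Cloud integration publishes Kafka records onto. The channel name `kafka:orders` and the API key are placeholders; the actual topic-to-channel mapping is whatever you configure in the integration rule.

```typescript
import * as Ably from 'ably';

// Placeholder API key; a real client would typically use token auth instead.
const ably = new Ably.Realtime('YOUR_ABLY_API_KEY');

// Hypothetical channel name; the Confluent Cloud integration rule decides
// which Kafka topics map to which Ably channels.
const channel = ably.channels.get('kafka:orders');

// Each message delivered to the client carries the payload of a Kafka record,
// pushed over a WebSocket connection rather than polled for.
channel.subscribe((message) => {
  console.log('New Kafka record:', message.data);
});
```

The point of the pattern is that the Kafka cluster stays behind the firewall: the integration forwards records outward, and end-user devices only ever hold a subscription to Ably.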
To update a document in MongoDB, I used to fetch it, change the values, and save it back, and I would question the need for a dedicated update method. Looking back, it's evident that performance was hardly a concern when I was working on a personal project. Working with a larger dataset is a whole different story, though, and it's where no-code tools can't help. In this article, I'll share some of what I've learned from working in MongoDB with millions of documents.
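To make that concrete, here's a minimal sketch using the official Node.js driver; the database, collection, and field names are hypothetical. Instead of reading the document, mutating it in memory, and writing the whole thing back, a single `updateOne` with `$set` sends only the changed field to the server.

```typescript
import { MongoClient, ObjectId } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');

async function deactivateUser(id: string): Promise<void> {
  await client.connect();
  const users = client.db('app').collection('users');

  // One round trip: the filter and the patch go to the server,
  // and only the 'active' field is rewritten on the matched document.
  await users.updateOne(
    { _id: new ObjectId(id) },
    { $set: { active: false } }
  );
}
```

With millions of documents the same idea scales up through `updateMany` and bulk writes, so the data never has to make a round trip through application memory.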