Redpanda is an [[Apache Kafka]]-compatible event streaming platform: no ZooKeeper, no JVM, and no code changes required, so you can keep using all your favorite open source tooling (see the client sketch at the end of this note). It has many similarities to [[Amazon Kinesis Data Streams]]. It uses the [[Raft (algorithm)]] consensus algorithm.

> If we're tracking the process of a new order in our system, here's how it is done, in a nutshell. First, your order goes through a system of gateways that receives it from your browser or app. From there, the gateways send Redpanda messages to our risk management engine to check whether your order is valid. The risk engine then sends another message over Redpanda to our matching engine, where trades are processed. Next, it sends another Redpanda message to another engine in order to settle the balance transfer to your account. Lastly, one more Redpanda message is sent back to the gateways to inform you about your order. A single flow uses five different engines, all of which are time-sensitive, which is why we thought Kafka would meet this need.

> Imagine you're running a large coffee bean retailer with a warehouse full of various grades of beans. One day there is a spike in demand, with everyone buying every single variety of coffee. In a short time, chaos ensues in the warehouse team, which no longer knows what kinds of beans are left. All of a sudden, hundreds of your clients have ordered the grade A coffee at nearly the same time, and your team does not know who has the right to the limited stock of those grade A beans. These kinds of problems are even more pronounced in large-scale, big-data companies, where billions of records are generated every second by one service and need to be processed further downstream by another, such as the ordering and warehouse systems in our example. The question is: how do companies track and deliver this data on time, without loss along the way, and with precision?

> Technically speaking, there are some ways to send the data over, the simplest of which is a manual database, kept either on pen and paper or in some kind of spreadsheet application, then delivered by snail mail or email. This has worked since the beginning of society; however, besides the obvious slowness of the transfer, there are two glaring problems: data loss and scalability.

> Lost messages may well have been the cause of defeats in wars or of dollars lost in business, and unless the sender manually adds redundancy, such as sending two copies of the same message, data loss remains a serious liability. Sending two copies, however, may be very expensive and impractical. As for the second problem, scale: trying to send a million spreadsheet files to dozens of recipients in a minute is also impractical, since the manpower needed to analyze and send the files is monstrous, and the system might simply crash. Being able to handle huge amounts of information and scale accordingly is something that most data-driven companies strive for.
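
Because Redpanda speaks the Kafka wire protocol, a stock Kafka client should work against it unchanged, which is what "no code changes required" amounts to in practice. Below is a minimal sketch, assuming a Redpanda broker listening on `localhost:9092` and the third-party `kafka-python` package; the topic name `orders` and the payload are made up for illustration.

```python
# Minimal sketch: an ordinary Kafka client pointed at a Redpanda broker.
# Assumes Redpanda on localhost:9092 and `pip install kafka-python`.
from kafka import KafkaProducer, KafkaConsumer

# Produce one message; only the bootstrap address refers to Redpanda.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("orders", key=b"order-42", value=b'{"symbol": "BTC", "qty": 1}')
producer.flush()  # block until the broker acknowledges the send

# Consume it back with a regular Kafka consumer group.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    group_id="demo-group",
    auto_offset_reset="earliest",
)
for record in consumer:
    print(record.key, record.value)
    break
```

Swapping the `bootstrap_servers` address from a Kafka broker to a Redpanda broker is, in principle, the only change an existing Kafka application needs.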
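
To make the quoted order pipeline concrete, here is a hedged sketch of a single hop: the risk management engine consuming orders from the gateways and forwarding valid ones toward the matching engine. The topic names, the JSON shape of an order, and the risk rule are all hypothetical; the quoted passage does not specify them. Each engine in the flow is just a consumer on its input topic and a producer on the next engine's topic.

```python
# Hypothetical risk-engine hop in the five-engine pipeline quoted above.
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "orders",                          # hypothetical topic fed by the gateways
    bootstrap_servers="localhost:9092",
    group_id="risk-engine",
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
)

for record in consumer:
    order = json.loads(record.value)
    # Hypothetical risk check: reject orders above a position limit.
    if order.get("qty", 0) <= 100:
        producer.send("validated-orders", order)  # on to the matching engine
    else:
        producer.send("rejected-orders", order)   # back to the gateways
```

The matching, settlement, and notification engines would follow the same consume-process-produce pattern on their own topics, which is why a time-sensitive multi-engine flow like this maps naturally onto a Kafka-style log.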