It’s no secret that the so-called “Internet of Things” isn’t really about the Things, but about the data. And it is a lot of data. Every day, more of the physical world, from manufacturing operations and food production systems to the trains we commute on, is connected to the Internet and automated, creating more and more streams of sensor data.
Multiply millions of things by the amount of data each device produces, and you get a rapidly growing torrent of information being used to make better business decisions, provide better end-user experiences, and produce more while wasting less.
Most engineering teams working on these initiatives (including ours, early in our company history) end up storing all of this data in a “platform,” or in multiple databases connected by queues and pipelines, with metadata in a relational database and time-series data in a NoSQL store. Yet each of these databases operates differently: running a polyglot database architecture adds unnecessary operational and application complexity, and relying on third-party platforms can be limiting and expensive.
You don’t need to do this.
In this talk, I’ll show you how to keep all of your IoT relational and time-series data together in PostgreSQL (yes, even at scale), and how that leads to simpler operations, more useful contextualized data, and greater ease of use. We’ll also highlight other awesome PostgreSQL features relevant to IoT, like powerful querying, flexible data types, geospatial support, and a rich ecosystem of tools and connectors, which together let you roll your own IoT data platform.
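To make the idea concrete, here is a minimal sketch of what keeping relational metadata and time-series readings side by side in plain PostgreSQL can look like. The table and column names are illustrative, not part of the talk:

```sql
-- Hypothetical schema: sensor metadata alongside time-series readings,
-- all in one PostgreSQL database.
CREATE TABLE sensors (
    sensor_id   serial PRIMARY KEY,
    device_type text NOT NULL,
    location    text            -- could be a PostGIS geometry for geospatial queries
);

CREATE TABLE readings (
    time        timestamptz NOT NULL,
    sensor_id   integer REFERENCES sensors (sensor_id),
    temperature double precision,
    payload     jsonb           -- flexible per-reading data
);

-- Contextualized query: join time-series data with its metadata directly,
-- no cross-database pipeline required.
SELECT s.device_type, avg(r.temperature)
FROM readings r
JOIN sensors s USING (sensor_id)
WHERE r.time > now() - interval '1 day'
GROUP BY s.device_type;
```

Because both kinds of data live in one database, a single SQL join replaces what would otherwise be an application-level merge across two stores.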