For now at least, each snapshot is consumed in isolation — there is no built-in way to look at data across snapshots. Within a snapshot, data is organized in SQL tables that the web app lets you interact with. Also, all the data in a snapshot can be downloaded as a SQLite database file, and queried on your machine however you see fit. There are two types of tables: function tables and derived tables.
A function table is created for every function in your spec — i.e. every function for which any variables/expressions are to be captured. These tables get a row for every stack frame (across all goroutines) that corresponds to the table’s function. Every variable/expression to be captured is a column in the table. If the value of one of the expressions is a Go struct, then the corresponding column will contain JSON documents. If a variable is not available to be collected on a particular frame (because it is no longer alive at the program counter where the snapshot has caught the respective frame, or because the compiler has optimized it away or the debug information is faulty), the respective column will be NULL.
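Since a snapshot can be downloaded as a SQLite file, a function table can be explored locally with ordinary tools. Here's a minimal sketch using Python's `sqlite3` module, with an in-memory database standing in for a downloaded snapshot; the table and column names are invented for illustration:

```python
import json
import sqlite3

# An in-memory SQLite database standing in for a downloaded snapshot.
# The function table below is hypothetical: one row per stack frame of
# the corresponding function, one column per captured variable.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE server_handleQuery (
        goroutine_id INTEGER,  -- the goroutine the frame belongs to
        query TEXT,            -- a captured string variable
        txn TEXT               -- a captured struct variable, stored as JSON
    )
""")
db.execute(
    "INSERT INTO server_handleQuery VALUES (?, ?, ?)",
    (42, "SELECT 1", json.dumps({"id": "abc", "priority": "high"})),
)
# A variable that couldn't be collected on a particular frame is NULL.
db.execute("INSERT INTO server_handleQuery VALUES (?, ?, NULL)", (43, "SELECT 2"))

for row in db.execute(
    "SELECT goroutine_id, query, txn FROM server_handleQuery ORDER BY goroutine_id"
):
    print(row)
```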
Besides the columns corresponding to variables/expressions, function tables can also have “extra columns” defined by SQL expressions evaluated on top of the variable columns. A common thing to do is to use a JSON Path expression that extracts a field from a variable into its own, dedicated column. For example, if a captured variable foo is of struct type (resulting in a column also called foo), an extra column with the expression foo->bar.baz will extract the respective field from foo’s JSON.
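The same kind of extraction works against a downloaded snapshot, since SQLite has JSON path functions of its own. A sketch, again with invented names; `json_extract` is SQLite's JSON path function, and recent SQLite versions also offer the `->` and `->>` operators for the same purpose:

```python
import json
import sqlite3

# Stand-in database: a hypothetical function table with one struct-typed
# variable, foo, stored as a JSON document.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE some_func (foo TEXT)")
db.execute(
    "INSERT INTO some_func VALUES (?)",
    (json.dumps({"bar": {"baz": 7}}),),
)

# The "extra column" pulls foo.bar.baz out into its own dedicated column.
row = db.execute(
    "SELECT foo, json_extract(foo, '$.bar.baz') AS baz FROM some_func"
).fetchone()
print(row)
```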
A derived table is defined by a SQL query; the query has access to the function tables and to all the derived tables previously defined. The query is arbitrary: it can join, filter and aggregate data as it sees fit. For example, you might imagine a derived table that joins together data from multiple frames of a single goroutine (e.g. combining information about a network connection known by a function towards the bottom of the stack with information about the current query being served on that connection, known by a function higher up the stack). Or, you might imagine a table that joins data across goroutines, for example producing a report about which operation is blocked on which other operation.
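The single-goroutine join above can be sketched as a SQL view over two hypothetical function tables (all names invented), joining frames of the same goroutine on its id:

```python
import sqlite3

# Stand-in database with two hypothetical function tables: a
# connection-handling frame and a query-execution frame from the same
# goroutine's stack.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE net_serveConn (goroutine_id INTEGER, remote_addr TEXT);
    CREATE TABLE sql_execQuery (goroutine_id INTEGER, query TEXT);
    INSERT INTO net_serveConn VALUES (7, '10.0.0.5:34122');
    INSERT INTO sql_execQuery VALUES (7, 'SELECT count(*) FROM t');

    -- A derived table is just a SQL query; this one pairs each in-flight
    -- query with the network connection it is being served on.
    CREATE VIEW queries_by_conn AS
        SELECT c.remote_addr, q.query
        FROM net_serveConn AS c
        JOIN sql_execQuery AS q USING (goroutine_id);
""")
print(db.execute("SELECT * FROM queries_by_conn").fetchall())
```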
The point of the derived tables is to raise the level of abstraction that you’re operating on when debugging your service: start from high-level information that someone has already aggregated for you instead of looking at raw goroutine data.
An exciting Side-Eye feature is the linking of table rows. You can define pairs of tables (or, more specifically, pairs of frames) whose rows should be linked based on common values in a set of columns. To take CockroachDB as an example, SQL queries execute within transactions, which are identified by a transaction id. Transactions can hold database locks, which can block other SQL queries. A query blocked on acquiring a lock naturally knows the transaction id of the lock holder. In Side-Eye, we can link (frames belonging to) the blocked function with frames corresponding to executing the transaction with the same id. If we do this, when looking at the goroutine corresponding to a blocked query, we can easily navigate to the goroutine corresponding to the transaction holding the lock and see what it’s currently doing. This can be very effective during an investigation, and it’s something that’s very hard to do without Side-Eye.
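In SQL terms, such a link is a join on the shared column values. Here's a sketch of the lock-contention example over a stand-in database (table and column names invented): frames of blocked goroutines carry the lock holder's transaction id, which joins to the frames executing that transaction.

```python
import sqlite3

# Stand-in database with two hypothetical function tables: one for frames
# blocked waiting on a lock, one for frames executing a transaction.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE lock_waitForTxn (goroutine_id INTEGER, holder_txn_id TEXT);
    CREATE TABLE txn_run (goroutine_id INTEGER, txn_id TEXT, status TEXT);
    INSERT INTO lock_waitForTxn VALUES (11, 'txn-abc');
    INSERT INTO txn_run VALUES (29, 'txn-abc', 'writing intents');
""")
# Joining on the common transaction id links each blocked goroutine to the
# goroutine holding the lock: the navigation that row links enable.
row = db.execute("""
    SELECT w.goroutine_id, t.goroutine_id, t.status
    FROM lock_waitForTxn AS w
    JOIN txn_run AS t ON w.holder_txn_id = t.txn_id
""").fetchone()
print(row)
```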
To “raise the level of abstraction” of debugging sessions even more, we’re prototyping a Grafana plugin that allows the creation of dashboards powered by Side-Eye data. The idea is to click and drag your way to nice-looking web pages that present and summarize our SQL tables, drawing attention to unusual things. For example, one can imagine a dashboard listing the oldest active queries in the system, coloring the ones that are older than some threshold in red.
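The data behind such a panel is, again, just a SQL query. A sketch of what it might compute, with invented names and a made-up 60-second threshold:

```python
import sqlite3

# Stand-in database with a hypothetical derived table of active queries.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE active_queries (query TEXT, age_seconds REAL);
    INSERT INTO active_queries VALUES ('SELECT * FROM orders', 312.0);
    INSERT INTO active_queries VALUES ('UPDATE inventory SET n = n - 1', 4.2);
""")
# List active queries, oldest first, flagging the ones over the threshold
# so a dashboard could color them red.
rows = db.execute("""
    SELECT query, age_seconds,
           CASE WHEN age_seconds > 60 THEN 'red' ELSE 'ok' END AS color
    FROM active_queries
    ORDER BY age_seconds DESC
""").fetchall()
for r in rows:
    print(r)
```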