Plenty of database queries use aggregates to summarize large amounts of data into a more usable form. You might run queries to show trends over time and then summarize them over larger time scales. With good workload analysis techniques, you can apply the same sort of ideas to analyze how the server itself is running. Workload analysis improves tuning, forecasting, and capacity planning.
If your PostgreSQL database server is really busy, but you're not sure exactly what it spends its time doing, consider yourself part of a large club. Some other database systems already treat workload analysis as a formal process, and PostgreSQL 9.2 adds new approaches for collecting the information needed to start characterizing your workload. But the main thing that's changing is how feasible it is to collect the data, not the sort of information that comes out. You can get a head start by approaching workload analysis as its own important process, beyond simple query log analysis, even with the PostgreSQL version you already have deployed.
Query logging, saving information about the statements that take a long time to execute, is a useful way to look at small amounts of data about your server's workload. Workload analysis takes that data and builds a larger picture about your server's health from it. PostgreSQL 9.2 makes it easier than ever to collect slow query information, but you can start workload analysis in any version.
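As a minimal sketch of the logging approach described above, the standard parameter that controls slow query logging can be set in postgresql.conf. The 500 ms threshold and the log_line_prefix format shown here are example choices, not recommendations for every workload:

```
# postgresql.conf -- log any statement that runs for 500 ms or longer.
# The value is in milliseconds; 0 logs every statement, -1 disables logging.
log_min_duration_statement = 500

# Optional prefix that adds context (timestamp, process ID, log line
# number, user, and database) to make entries easier to analyze later.
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d '
```

Both settings can also be changed per session with SET, or reloaded server-wide without a restart, which makes it practical to tighten the threshold temporarily while investigating a problem.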
Topics covered will include:
Slow query logging options
Tuning the minimum duration defining "slow"
Workload analysis practices
Log-based workload analysis using tools like pgFouine
Query normalization
Enhancing pg_stat_statements to handle normalization
Techniques for tracking long-term workload trends
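As a preview of the statistics-based side of these topics, here is a sketch of the kind of query you might run against the pg_stat_statements view to find your most expensive statements. It assumes the extension is installed (shared_preload_libraries = 'pg_stat_statements' plus CREATE EXTENSION in the target database) and uses the 9.2-era column names:

```sql
-- Top ten statements by total execution time, with per-call average.
-- total_time is reported in milliseconds in this version of the view.
SELECT query,
       calls,
       round(total_time::numeric, 2) AS total_ms,
       round((total_time / calls)::numeric, 2) AS avg_ms
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
```

Because pg_stat_statements aggregates continuously in memory, a query like this answers "what does the server spend its time doing?" directly, without parsing log files first.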