Certainly, the 'tested, certified' bit is nice in some environments. In our case, auditing requirements are that we either use a certified software stack, or we go on our own but have to show that we're doing quick updates to every little component that feeds into it. So, for sanity purposes, we've historically gone with the standard offerings of the Linux distributions. The problem with this is that they tend to be years behind the curve. For example, most distributions have only recently adopted PHP 5.3 after having stuck with 5.1 (!). That's just not acceptable when you're trying to develop modern applications that use modern coding techniques, plus you're giving up a ton in terms of PHP performance and reliability.
Having said that, the features are quite nice, too. @Keven already mentioned the job queue. That's awesome for us, in that we can very easily offload all sorts of tasks to run asynchronously and keep the main request process flying. As an example, one of our applications creates tasks in our bug tracker whenever certain types of events happen. Since this goes through a web service, and the bug tracker is horrendously slow, it can take several seconds. Rather than making the users of our application wait, we just queue up a job and let it run in the background. Likewise, our standard e-mail class uses the job queue rather than making the user wait while our code talks to an SMTP server. And all that's not even touching the usefulness for things like generating large reports, running database integrity checks, rebuilding caches, etc., etc.
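To make that concrete, queuing one of those background tasks looks roughly like this. This is a sketch assuming Zend Server's ZendJobQueue API; the job URL, parameter names, and job name are all invented for illustration:

```php
<?php
// Sketch only: requires Zend Server's job queue extension to be loaded.
// The URL and parameters below are hypothetical.
$details = 'deployment failed on host web-03';

$queue = new ZendJobQueue();

// Queue the slow bug-tracker call instead of making the user wait.
// The script at this URL is executed in the background by the queue daemon.
$jobId = $queue->createHttpJob(
    'https://app.example.com/jobs/create_bug_tracker_task.php',
    array('event_type' => 'deploy_failed', 'details' => $details),
    array('name' => 'bug-tracker-task')
);
```

The request that queued the job returns immediately; the several-second web service call to the bug tracker happens out of band.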
The page cache is great for those cases where you can simply cache a whole page and be done with it. We use this with our WSDLs, since we have better control than PHP's own caching controls. Likewise, the download server is wonderful for caching certain types of content, like images. And we use the data cache like a local memcached server to greatly speed up all sorts of requests by avoiding doing a query to a slow database server sitting somewhere else on the slow network.
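The data-cache pattern is simple enough to sketch. This assumes Zend Server's `zend_shm_cache_*` functions; the cache key, TTL, and query are made up for illustration:

```php
<?php
// Sketch: assumes Zend Server's Data Cache extension (zend_shm_cache_*).
// The key name, 300-second TTL, and query are hypothetical.
function get_report_rows(PDO $db)
{
    $key = 'reports::summary';

    $rows = zend_shm_cache_fetch($key);
    if ($rows !== false) {
        return $rows; // served from local shared memory, no network hop
    }

    // Cache miss: hit the slow remote database, then cache the result.
    $rows = $db->query('SELECT id, total FROM report_summary')->fetchAll();
    zend_shm_cache_store($key, $rows, 300);
    return $rows;
}
```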
And of course, as @André mentions, there are some very nice debugging, tracing, and event reporting features in there.
There are also some nice features for doing deployments and rollbacks, which are very important with business-critical applications. I intend to try these out someday, but for now I'm still using the tools I put together prior to using ZS.
Now, you can get most of these features (particularly, all the caching bits) by cobbling together a variety of other tools. But, you then have to research and learn all those things, get them all installed and working together, and then maintain them all, including doing proper integration testing when something is updated. That's a lot of work and time -- time I'd personally rather spend writing code.
Having said all that, there are downsides. For one, things sometimes feel... half-baked and/or ill-conceived. For example, the data cache API returns boolean false if you try to fetch an item that doesn't exist, and it has no function for checking whether an item exists without also fetching it. Guess what that means: you can't safely store a boolean value, because you can't safely retrieve it. There's a poorly documented APC compatibility layer, but trying to use APC's existence-check function through it produces an undefined-function error.
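A toy stand-in for the API (plain PHP, not the real extension) shows exactly why false-on-miss bites:

```php
<?php
// Toy stand-in for the Data Cache API (not the real extension):
// like zend_shm_cache_fetch(), fetching a missing key returns false.
$cache = array();

function toy_cache_store(array &$cache, $key, $value)
{
    $cache[$key] = $value;
}

function toy_cache_fetch(array &$cache, $key)
{
    return array_key_exists($key, $cache) ? $cache[$key] : false;
}

// Store a perfectly legitimate boolean false...
toy_cache_store($cache, 'feature_enabled', false);

// ...and now a hit is indistinguishable from a miss.
var_dump(toy_cache_fetch($cache, 'feature_enabled')); // bool(false) -- a hit
var_dump(toy_cache_fetch($cache, 'no_such_key'));     // bool(false) -- a miss
```

Without a separate existence check, the caller has no way to tell those two results apart.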
As another example, we use Macs for our development stations, but out of a greatly misguided concern over compatibility with the ancient hardware that tends to be run by all those professional developers out there who drop thousands on PHP server software, Zend has chosen to ship the Mac version (which is for development only) as 32-bit only. So we're forced to develop in 32-bit an application that runs everywhere else in 64-bit. This caused quite a few bugs and failed automated tests in our application, and it rather defeats one of the core purposes of ZS: an identical software stack across development, test, staging, QA, and production environments. I tried to talk them into changing this, but they quickly started ignoring me.
Another big one is that the job queue can only process jobs through HTTP requests. The API is set up to allow other methods (like the much more sensible command line call), but HTTP is all that works. This forces you to tie up web server connections with tasks that, by design, tend to be long-running and thus should be taken out of web context. And, it forces you to jump through hoops to keep the world from being able to trigger your jobs by visiting a URL in a browser. It's just a stupid decision.
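For what it's worth, the hoop-jumping looks something like this. The shared-secret approach and the `token` parameter name are my own convention, not anything ZS provides; I'm assuming `ZendJobQueue::getCurrentJobParams()` as the way to read the variables passed to `createHttpJob()`:

```php
<?php
// Sketch: a guard at the top of every job script, so the world can't
// trigger the job just by visiting its URL in a browser.
// JOB_SECRET and the 'token' parameter are my own convention; in a real
// setup the secret would come from deployment config, not source code.
define('JOB_SECRET', 'some-long-random-string');

$params = ZendJobQueue::getCurrentJobParams();
if (!is_array($params)
    || !isset($params['token'])
    || $params['token'] !== JOB_SECRET
) {
    header('HTTP/1.0 403 Forbidden');
    exit('Not a queue request');
}

// ...the actual long-running work goes here...
```

The queuing side then has to remember to pass `'token' => JOB_SECRET` in the variables of every `createHttpJob()` call, which is exactly the kind of busywork a command-line execution mode would make unnecessary.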
Other examples are the poor handling of custom events sent via API to Zend Monitor, the php-cli wrapper for the PHP binary that breaks on the Mac when triggered by shebang line, the complete (utter) lack of health and performance reporting in the cache tools (though they said this is changing in ZS 6), and the embarrassingly incomplete documentation. I could go on....
Now, those downsides, and the wasted time and resources that come along for the ride, obviously haven't outweighed the benefits for us, but for the amount of money we're spending, I definitely expect more.