Accessing DBI inside those loops is not straightforward. Most solutions involve forking worker processes and using pipes to communicate between the main script and those workers. There are a couple of POE components that do most of the work out of the box, like POE::Component::EasyDBI, but it still feels a lot like a hack.
For my Danga::Socket loops, I've been working with two "simple" solutions:
- split the work between sync and async tasks, using disk-based storage to move work from one side to the other;
- use HTTP-based REST web services.
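As a rough illustration of the first approach, the async side can drop each job as a file into a spool directory, and a plain synchronous worker is then free to block on DBI while draining it. This is only a sketch under assumptions: the spool path, the file format, and the `enqueue_job`/`work_loop` names are all made up here, and renaming a finished temp file into place is what keeps the hand-off atomic.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Hypothetical spool directory shared by the async and sync sides.
my $spool_dir = '/var/spool/db-jobs';

# Async side: serialize the job and drop it in the spool.
# Write to a temp file first, then rename, so the worker never
# sees a half-written job.
sub enqueue_job {
    my ($sql, @bind) = @_;
    my ($fh, $tmp) = tempfile(DIR => $spool_dir, SUFFIX => '.tmp');
    print $fh join("\n", $sql, @bind);
    close $fh;
    (my $final = $tmp) =~ s/\.tmp$/.job/;
    rename $tmp, $final or die "rename failed: $!";
}

# Sync side: a plain blocking loop that is free to use DBI.
sub work_loop {
    my ($dbh) = @_;
    for my $file (glob "$spool_dir/*.job") {
        open my $fh, '<', $file or next;
        my ($sql, @bind) = split /\n/, do { local $/; <$fh> };
        close $fh;
        $dbh->do($sql, undef, @bind);
        unlink $file;
    }
}
```

The same layout works in reverse for moving results back to the async side.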
There are two more solutions that might work now. The first is the amazing DBD::Gofer. I haven't played with it yet (look over the Tim Bunce presentation on CPAN to get an overview), but it simplifies the client side of things so much that it might just be possible to tweak it into an async DBD driver. The DBI API would have to be trimmed a bit, since I don't think it has an async version.
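Part of what makes Gofer attractive is that, for the client, adopting it is mostly a DSN change: the `dbi:Gofer:` prefix names a transport and wraps the real DSN. A minimal sketch, where the proxy URL, database name, and credentials are placeholders:

```perl
use strict;
use warnings;
use DBI;

# Placeholder credentials for the sketch.
my ($user, $password) = ('app_user', 'secret');

# Same DBI code as before; only the DSN changes. Gofer forwards
# the inner dsn=... through the chosen transport (http here,
# against a placeholder proxy URL).
my $dbh = DBI->connect(
    'dbi:Gofer:transport=http;url=http://db-proxy.example.com/gofer;dsn=dbi:mysql:mydb',
    $user, $password,
    { RaiseError => 1 },
);
```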
Gofer is nice, but it will still require an HTTP server for the Gofer servers. And if I have an HTTP server, I might prefer a higher-level API that can also group several queries in a single call; some of those calls could even wrap a transaction, something Gofer does not support.
The other solution is to use Gearman. It's fast, and it seems to have all the niceties needed to scale (multiple workers, multiple job servers). But it is not reliable out of the box; the client has to add that reliability itself, in code.
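To make that last point concrete, here is the kind of code the client ends up writing. `do_task` in the Perl Gearman::Client API returns a reference to the result on success and undef on failure, with no durability guarantee if a job server dies, so a minimal fix is a retry wrapper. The job-server address, the `run_query` task name, and the retry policy below are assumptions for the sketch:

```perl
use strict;
use warnings;
use Gearman::Client;

my $client = Gearman::Client->new;
$client->job_servers('127.0.0.1:4730');    # placeholder address

# do_task() returns a scalar ref on success, undef on failure.
# Reliability is on us: retry a few times before giving up.
sub reliable_do_task {
    my ($name, $arg, $tries) = @_;
    $tries ||= 3;
    for my $attempt (1 .. $tries) {
        my $result = $client->do_task($name, $arg);
        return $$result if defined $result;
        warn "attempt $attempt of $tries for '$name' failed, retrying\n";
        sleep 1;
    }
    die "task '$name' failed after $tries attempts\n";
}

# Hypothetical usage: hand a query to whatever worker picks it up.
my $rows = reliable_do_task('run_query', 'SELECT 1');
```

Anything stronger (persisting jobs so they survive a job-server crash) also has to live on the client or worker side.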
All in all, I think both solutions are good, and you can even use Gofer for some things and Gearman for others. Heck, you could even use Gearman as a back-end transport for Gofer.
For now, I think I'll try Gearman: it seems like less work, and I'm extremely lazy. But I'll get back to Gofer soon. I would love to see an asynchronous DBI API, and DBD::Gofer might just be the door.