The idea behind CPAN::Reporter is great: take advantage of all those daily uses of the cpan shell to collect test reports from a large network of users. I tried several times to enable CPAN::Reporter, but it always delayed my workflow just enough to become a nuisance. After each test phase, it would open an SMTP connection and send the report, and those 3 or 4 seconds were a bit too much for me.
Since May I've been using PGP Whole Disk Encryption on my laptop and its Time Machine external drive. Almost 6 months later I can report that it works great: you don't notice it at all. Strongly recommended, if you need this sort of thing. But there are no completely secure software-only solutions, and it's good to know the limitations, like the "Evil Maid" Attacks on Encrypted Hard Drives. The comments on that article are also worth a read.
(Update: I've pushed my code, including three new scripts, to the nfsd_report_bench/ directory on my examples repository. See below for some clarifications based on comments I received). A former colleague of mine at PT had a small reporting problem, and he ended up comparing several languages for the job: C, Perl, PHP, and Python. I was curious about the results, so I took the latest version of the Perl script that he was using and set off to work.
I'm playing with a new command for the CPAN::Shell: 's', for searching on http://search.cpan.org. It takes a single argument (a module, distribution, bundle, or author name), checks the CPAN indexes to see which type it is, creates the proper search.cpan.org URL for it, and opens your browser. That last bit, opening the browser, is still very immature: right now it only works on Mac OS X.
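The gist of the command can be sketched in a few lines. This is a hypothetical illustration, not the actual shell code: the type detection against the CPAN indexes is elided and assumed already done, and the URL patterns are the well-known search.cpan.org ones.

```perl
use strict;
use warnings;

# Map an already-detected index type to its search.cpan.org URL.
# (Real code would first look the name up in the CPAN indexes.)
sub search_url {
    my ($type, $name) = @_;

    return "http://search.cpan.org/perldoc?$name" if $type eq 'module';
    return "http://search.cpan.org/dist/$name"    if $type eq 'dist';
    return "http://search.cpan.org/~\L$name"      if $type eq 'author';  # PAUSE IDs are lowercased in the URL
    die "unknown type '$type'";
}

# On Mac OS X, 'open' hands the URL to the default browser.
sub open_url {
    my ($url) = @_;
    system('open', $url) == 0 or die "open failed: $?";
}

print search_url('module', 'AnyEvent::Mojo'), "\n";
```

On other platforms, `open_url` is where things get hairy; see the next entry.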
I've uploaded a small module to CPAN, Browser::Open (give it a couple of minutes to show up). It does one simple thing: given a $url, it opens the default browser with it. The difficult part is deciding how to open the "default browser". On Mac OS X, this is easy: just execute the open command. On Windows, there is a start command that should do the trick, but I'm not a Windows user so I cannot test this.
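The core idea can be sketched as a per-platform command table keyed on `$^O`. This is a hedged illustration of the approach, not Browser::Open's actual internals; the command table below is my assumption.

```perl
use strict;
use warnings;

# Per-platform openers (assumed, not Browser::Open's real table).
my %open_cmd_for = (
    darwin  => ['open'],          # Mac OS X
    MSWin32 => ['start', ''],     # cmd.exe builtin; empty string is the window title
    linux   => ['xdg-open'],      # freedesktop systems
);

# Return the command list to launch the default browser on $os with $url.
sub browser_command {
    my ($os, $url) = @_;
    my $cmd = $open_cmd_for{$os} or die "don't know how to open a browser on '$os'";
    return (@$cmd, $url);
}

# system(browser_command($^O, $url)) would actually launch it.
print join(' ', browser_command('darwin', 'http://search.cpan.org/')), "\n";
```

Keeping the table data-driven makes it easy to add fallbacks per platform later.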
I did the research on this a month ago and forgot to write it down, so I just spent another hour doing it again. I should know better by now. Anyway, you can download the Java 1.6 update for Mac OS X Leopard from the Apple software site, but it's 64-bit only. I do have a 64-bit desktop; unfortunately, my MacBook Pro laptop has only a 32-bit Core Duo.
I've used Gearman on and off in the past, but for a new project I've decided to explore some features I rarely made use of before, most notably the unique ID that clients can submit with each job. Let me clarify two behaviors for non-background jobs: the worker will run the job to completion even if the client that submitted it dies; but if the client dies before the job is handed to a worker, the job is removed and will never execute.
This all started with an article by Marcel Gruenauer "hanekomu", "Repeatedly installing Task::* distributions". What he wants is a way to tell CPAN: "install the latest versions of my dependencies". Unfortunately, his solution won't work. The code he gives us will prevent the Task:: module from being installed, but it will not guarantee that the latest versions of the prereqs get installed on the following runs. The reason is simple: if you don't ask for a specific version of a prereq, CPAN will accept any version, so it will only install each prereq once, the first time.
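To make the point concrete, here is an illustrative Makefile.PL fragment (the dist name and version numbers are made up for the example): a prereq version of 0 is satisfied by whatever is already installed, so CPAN never revisits it, while an explicit minimum forces an upgrade on older installs.

```perl
# Illustrative Makefile.PL fragment for a hypothetical Task:: dist.
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME      => 'Task::MyStuff',
    VERSION   => '0.01',
    PREREQ_PM => {
        'Moose'    => 0,        # any installed version satisfies this, forever
        'JSON::XS' => '2.0',    # assumed minimum: upgrades anything older than 2.0
    },
);
```

Even with explicit minimums, CPAN only guarantees "at least this version", never "the latest", which is why the trick in the article can't deliver what it promises.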
I've uploaded release 0.8 of AnyEvent::Mojo to PAUSE. It should be on your local CPAN mirror in a little while. This was a long time coming, unfortunately, and I accumulated FAIL reports on CPAN Testers, but it's here now. Given that it uses the latest Mojo release, it supports HTTP keep-alive and pipelining, chunked encoding, and 100-Continue requests. Although the test suite passes, I'm not fully confident in the pipelining code. My next step is to write a client with a slow network reader to exercise some corner cases in that part of the code.
For some time now I've had the GitHub gem installed (if you want to know more, I suggest an old blog post about the GitHub gem). It gives you a small gh script that interfaces with the GitHub API and makes common operations, like creating repositories, cloning, and fetching other repositories in the project network, easy and fast. But I'm a lazy bastard, and the lack of a bash completion script was getting on my nerves.
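The scratch for that itch is small. Here is a minimal sketch of what such a completion could look like; the subcommand list is illustrative, not the gem's authoritative set.

```shell
# Minimal bash completion sketch for the gh script.
# The subcommand list below is an assumption for illustration.
_gh_complete() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    local subcommands="create clone fetch pull info network home browse"
    # Offer the subcommands that match what the user typed so far.
    COMPREPLY=( $(compgen -W "$subcommands" -- "$cur") )
}
complete -F _gh_complete gh
```

Source it from your .bashrc (or drop it in your bash completion directory) and `gh cl<TAB>` completes to `gh clone`.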