Aaron J Mackey
Tue, 12 Dec 2000 14:54:43 -0500 (EST)
On Tue, 12 Dec 2000, Jason Stajich wrote:
> Checked in new modules Bio::DB::WebDBSeqI, Bio::DB::NCBIHelper
> which provide common functionality for connecting to Web-based Sequence
> databases. Bio::DB::GenBank, Bio::DB::GenPept, Bio::DB::SwissProt, and
> t/DB.t were all updated to migrate to this new code.
Thank you for doing this; we'll all appreciate it greatly.
> use Bio::DB::GenBank;
> my $db = new Bio::DB::GenBank;
> $db->ua->proxy('protocol', 'hostname');
Excellent interface decision (to just use UserAgent's), I think.
> Also, about temporary files. I am using File::Temp, which behaves
> wonderfully on my Solaris machines. I'd appreciate those with different
> architectures testing it out and letting me know if we are having any
> problems.
I've been pondering how to use UserAgent's callback mechanism to implement
"What We Really Want". You could do it if you didn't mind forking and
using shared memory (via IPC::Shareable or some such): have one process
which executes the request with a callback that captures the data for one
sequence, stores it away in shared memory, and then loops/waits until that
sequence is consumed (i.e. the memory is cleared), after which it again
collects enough data for another sequence, loops/waits, etc. The other
process implements next_seq(): it grabs the sequence data from shared
memory, clears the shared memory, and builds the Seq object. next_seq()
would potentially have to wait until the shared-memory sequence data is
marked "ready", and there are other timing issues you'd need to keep track
of, but it wouldn't be that hard.
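To make the handshake concrete, here's a rough sketch under some stated
assumptions: it uses core SysV shared memory (shmget/shmread/shmwrite)
rather than IPC::Shareable, a made-up record format (an accession:data
string, not real GenBank output), and a naive polling loop where a real
version would use semaphores. Unix-only, obviously:

```perl
#!/usr/bin/perl -w
use strict;
use IPC::SysV qw(IPC_PRIVATE IPC_CREAT IPC_RMID S_IRWXU);

# Byte 0 of the segment is the "ready" flag; the record follows a
# 2-byte length at offset 1.  The child is the "server", the parent
# implements a next_seq()-style consumer.

my $SIZE = 1024;
my $id = shmget(IPC_PRIVATE, $SIZE, IPC_CREAT | S_IRWXU);
defined $id or die "shmget: $!";
shmwrite($id, "\0", 0, 1) or die "shmwrite: $!";   # flag starts clear

my $pid = fork();
die "fork: $!" unless defined $pid;

if ($pid == 0) {
    # child "server": write one record, raise the flag, then wait for
    # the parent to clear it before writing the next (empty = done).
    # In real life the records would come from the UserAgent callback.
    for my $rec ("AC001:SEQDATA", "AC002:SEQDATA", "") {
        shmwrite($id, pack("n", length $rec) . $rec, 1, 2 + length $rec)
            or die "shmwrite: $!";
        shmwrite($id, "\1", 0, 1) or die "shmwrite: $!";
        while (1) {                     # wait until record is consumed
            shmread($id, my $flag, 0, 1) or die "shmread: $!";
            last if $flag eq "\0";
            select(undef, undef, undef, 0.01);   # don't burn CPU
        }
    }
    exit 0;
}

# parent "client": next_seq() waits for the flag, reads the record,
# then clears the flag so the server can refill the segment
sub next_seq {
    while (1) {
        shmread($id, my $flag, 0, 1) or die "shmread: $!";
        if ($flag eq "\1") {
            shmread($id, my $lenbuf, 1, 2) or die "shmread: $!";
            my $len = unpack("n", $lenbuf);
            my $rec = "";
            if ($len) {
                shmread($id, $rec, 3, $len) or die "shmread: $!";
            }
            shmwrite($id, "\0", 0, 1) or die "shmwrite: $!";
            return $len ? $rec : undef;   # empty record = end of stream
        }
        select(undef, undef, undef, 0.01);
    }
}

my @records;
while (defined(my $rec = next_seq())) {
    push @records, $rec;                 # stand-in for building a Seq
}
waitpid($pid, 0);
shmctl($id, IPC_RMID, 0);                # tear down the segment
print "$_\n" for @records;
```

Note the ordering in the child: the record is written first and the flag
raised second, so the parent never sees a half-written record.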
Of course this wouldn't work anytime soon on the Win32 or Mac ports. There
might be other ways of doing IPC that avoid shared memory (bidirectional
communication between the two processes in a server/client mode, etc.),
but the idea is the same: a "server" process reads enough data for one
sequence and then stalls until another is requested.
Coding the beast is left as an exercise ;)
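For the curious, the pipe-based variant might look roughly like this
(core Perl only, no shared memory; the accessions and the accession:data
record format are made up for illustration, and a real server would feed
the pipe from the UserAgent callback):

```perl
#!/usr/bin/perl -w
use strict;

# The child is the "server": it writes one record at a time, and once
# the pipe buffer fills it blocks until the reader drains it, so only
# a bounded amount of data is ever buffered -- which is the stalling
# behaviour we want, for free.

pipe(my $reader, my $writer) or die "pipe: $!";

my $pid = fork();
die "fork: $!" unless defined $pid;

if ($pid == 0) {
    # child "server": emit one sequence record per line
    close $reader;
    for my $acc (qw(AC001 AC002 AC003)) {
        print {$writer} "$acc:FAKESEQDATA\n";
    }
    close $writer;
    exit 0;
}

# parent "client": next_seq() pulls one record off the pipe
close $writer;

sub next_seq {
    my $line = <$reader>;
    return undef unless defined $line;   # EOF = server is done
    chomp $line;
    my ($acc, $data) = split /:/, $line, 2;
    return { acc => $acc, data => $data };   # stand-in for a Seq object
}

my @got;
while (my $seq = next_seq()) {
    push @got, $seq->{acc};
}
close $reader;
waitpid($pid, 0);
print "got: @got\n";
```

The line-per-record framing is the weak point (real GenBank flatfiles are
multi-line), so a real version would need a proper record separator, but
the process structure is the same.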