Tuesday, 29 September 2009
I have just found an interesting thing in my test suite.
The other day, I had some silent failures because I was capturing the output of some commands, but not checking their error codes.
my $output = `/some/command/which/normally/works -params`;
Since the output was unimportant, but the command working was, I thought I'd switch as follows:
# system returns the wait status ($?); non-zero means failure
my $rc = system(q{/some/command/which/normally/works -params});
if ($rc != 0) {
    croak q{Meaningful message};
}
All seemed fine, until I ran the test suite:
t/20-mytests.t ..
1..9
ok 1
ok 2
ok 3
ok 4
ok 5
ok 6
ok 7
ok 8
00000000ok 9
Failed 1/9 subtests
Test Summary Report
-------------------
t/20-mytests.t (Wstat: 0 Tests: 8 Failed: 0)
Parse errors: Bad plan. You planned 9 tests but ran 8.
So what is wrong here?
I don't know all the ins and outs, but after a bit of debugging, the cause is that system, unlike backticks, doesn't capture the command's output - it goes straight to STDOUT, which is exactly where the harness reads the TAP stream. You can see it above: the command printed '00000000' with no trailing newline, so 'ok 9' was glued onto the end of it and the parser couldn't count the ninth test, regardless of it passing or not.
Solution to this:
Go back to capturing the output:
use English qw(-no_match_vars);   # provides $CHILD_ERROR (aka $?)

my $output = `/some/command/which/normally/works -params`;
if ($CHILD_ERROR != 0) {
    croak q{Meaningful message};
}
Everything is now fine again.
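Another option, if the output really is unimportant, is to keep system but redirect the command's output away, so nothing can leak into the TAP stream. A minimal sketch (the command and flags are the placeholders from the example above):

use Carp qw(croak);

# send the command's stdout and stderr to /dev/null so the
# test harness only ever sees the TAP output
my $rc = system(q{/some/command/which/normally/works -params >/dev/null 2>&1});
if ($rc != 0) {
    croak q{Meaningful message};
}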
Saturday, 26 September 2009
Sometimes there just isn't enough
I'm starting to get confused. (OK, 'starting' is wrong.) What is the problem with more error/debug output rather than less?
I mean, sure, your average user is not going to have the foggiest idea what your error information means, so a nice helpful message is good. But what about in a system (which has many, many tests) where the average production run takes 1000 times longer than any of your tests? And you get this from Joe User:
HI,
I received this error message:
An error occurred in processing your request. Please let us know at line 360 in something.pl
Could you do something about this? I had been running this process for 10 hours.
Cheers
Joe
This is great, except I know absolutely nothing further. Joe, understandably, reported the error he got (all of it!) and so thinks he has been helpful, but when you look at line 360, it's the croak after an eval block, with all your code in multiple modules running inside it. AAAHHHH!
From looking around (and I may only have looked at a relatively small subset), people seem to think that letting the user see the whole error stack is a bad thing. Why? Speaking with some users, they don't understand the output and would like an additional friendly message, but they want to be helpful. They just don't know what to say.
My response to get info from Joe above would be:
On which node did you run this?
What parameters did you set/include?
At what time did you start and stop this process?
Did you have an error log?
To which Joe replies:
Parameters: a,b,c
I submitted it to the farm from work_cluster2
My bsub command: ....
Error log: /dev/null (because that is what he was instructed to use)
Great, no further output. I can't reproduce, I can't write a test, I can't track down the problem without running the whole thing myself.
So what are the potential solutions?
1) Don't be afraid to show your user what looks like the internal workings of the script. Give them the whole stack output. With a friendly message too, of course.
2) Training to ensure that they write an error log (/dev/null is not a log, but users often don't know that).
3) Training to ensure that they email you, or bug report through RT, the whole stack. If they know it will help you solve their problem, they ought to be happy. (Certainly, users I spoke to said they would be happy to provide 200 lines of 'garbage output' if it meant that I could solve a problem even 25% faster.)
I don't know who suggested it, or who started the culture. It might not be great for commercially sold apps, but certainly where you have internal or free (in its many forms) software, surely you shouldn't be worried about 'what the user might find out', because, I would be 95% certain, they don't actually care!
So big it up for confess and cluck, or even the occasional print 'debug info trace point', as they ought to be the way of the future in dealing with those bugs in a timely fashion.
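To illustrate the difference, a minimal sketch (the package, function and messages are invented for illustration): croak reports a single line blaming the caller, whereas confess appends the full call stack - exactly the 'garbage output' that lets you find the real problem.

package My::Pipeline;   # invented package for illustration
use strict;
use warnings;
use Carp qw(croak confess);

sub run_step {
    my ($step) = @_;
    # croak would give Joe one line, pointing at his calling code:
    # croak qq{Step '$step' failed};

    # confess gives the friendly message plus the whole stack trace:
    confess qq{Step '$step' failed - please send us all of this output};
}

1;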
Friday, 18 September 2009
readdir in the wrong order
This week we found an interesting bug (which seems quite obvious really, but still threw a spanner in the works): the readdir function doesn't equate to ls'ing a directory, i.e. you can't be sure that the files will come back in alphanumeric order.
So, as you can probably work out, I had a method using this function to obtain a list of files in a directory, and then was passing the list to another program.
This program doesn't actually do any internal ordering of the list of files passed to it (where was that documented?), even though we promise that when two (or three, if multiplexed) files for a lane are generated with it, everything will be in the correct order. So readdir alone, passing the list of files straight through, meant they weren't guaranteed to be in the order needed. AAHHHH!
So, a quick sort of the list, and everything is now fine, but that was a bit of a surprise. So was spending 2 days trying to reorder the files that had been created wrongly - although I now have a script that can do this should it happen again, which on the farm only takes about 10 minutes.
So, as I said, an interesting bug. I will just have to remember that if I do anything with a list from readdir in future, I should run a sort on it afterwards, just in case.
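The safe pattern is a one-line change, sketched below (the directory name is just a placeholder):

use strict;
use warnings;
use Carp qw(croak);

my $dir = q{/staging/run_folder};   # placeholder directory
opendir my $dh, $dir or croak qq{Cannot opendir $dir: $!};
# readdir makes no promises about ordering, so sort before use
my @files = sort grep { $_ ne q{.} && $_ ne q{..} } readdir $dh;
closedir $dh or croak qq{Cannot closedir $dir: $!};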
Saturday, 12 September 2009
My first CPAN module - update
I have been politely requested to rename my first module to MooseX::File_or_DB::Storage, so I have done so and it can now be found here:
http://tinyurl.com/po5voa
It is exactly the same as v0.2, just with a different package name. MooseX::Storage::File_or_DB is scheduled for deletion from CPAN on Wed 16th, so I urge anyone using it to update now.
Cheers
Andy
Monday, 7 September 2009
My first CPAN module
So I have pushed my first ever CPAN module, MooseX::Storage::File_or_DB:
http://tinyurl.com/l3xfkc
I blogged about this module before, when I started it:
http://vampiresoftware.blogspot.com/2009/08/filesystemdatabase-or-both.html
and this weekend I was finally able to finish the first release.
At the moment, you need to use this by extending it:
package MyClass;
use Moose;
extends q{MooseX::Storage::File_or_DB};
But in a future release I hope you will just be able to:
use MooseX::Storage::File_or_DB;
This gives you the functionality to write the object either to a file as a JSON string, or to a database, and to re-instantiate the object from either.
It makes heavy use of MooseX::Storage ( http://tinyurl.com/nujf4c ) - a big thanks to Tomas Doran for writing this - for inspecting the object and providing the ability to write it out to a file as a JSON string.
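For anyone who hasn't seen it, the underlying MooseX::Storage usage looks roughly like this (a sketch based on its documented JSON format and File IO traits; the class and attribute are invented for illustration):

package My::QC::Result;   # invented example class
use Moose;
use MooseX::Storage;

with Storage('format' => 'JSON', 'io' => 'File');

has 'pass_rate' => (is => 'rw', isa => 'Num');

package main;

my $result = My::QC::Result->new(pass_rate => 0.98);
$result->store('result.json');                    # write out as JSON
my $copy = My::QC::Result->load('result.json');   # re-instantiate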
I hope that this will prove useful. Please do read the POD/CPAN page before use, and contact me about anything you feel relating to this - all constructive comments gratefully received.
Sunday, 6 September 2009
Backending Catalyst
I am starting to look at Catalyst as a method to display some quality control results that are coming out of our analysis pipeline.
ClearPress is a nice MVC webapp builder, but it is quite a lightweight framework and uses Class::Accessor as its object base. We would like to move towards using Moose-based objects, and need a way to integrate these into Catalyst.
I am currently working my way through the latest Catalyst book (Diment & Trout), but before it arrived I found we had the following book on our Safari subscription: Catalyst: Accelerating Perl Web Application Development (Design, develop, test and deploy applications with the open-source Catalyst MVC framework), by Jonathan Rockway.
Now, note, I had been through the tutorial on CPAN, but couldn't find anything there about using a filesystem as a source for the model (did I miss something?). Luckily, this book had a section on doing so.
Firstly, why do we have QC data in a filesystem?
When we run the pipeline, this all happens on a staging area, which we write everything to, and we then copy all our data into long-term archival databases. The QC data is no exception, but we only want to archive the final agreed data. Bioinformaticians never seem to be happy with a first pass that fails if there is any chance it could be improved (a new test pipeline version, could rerunning just squeeze 2% more...). As such, we want to view the data in exactly the same way from the filesystem as from the database, because we don't want it stored until the last possible moment.
What have we done for this?
My team have been producing Moose objects which are:
1) Producing the data
2) Storing in JSON files (MooseX::Storage)
3) Reading in JSON files (MooseX::Storage) to re-instantiate the object
4) Saving to a Database (Fey)
5) Re-instantiating from a Database
I've been working with iterations of the objects using the files, but I want the objects to just sort it out themselves - I shouldn't need to know where the data has come from, and these objects should be used in (in fact are being written for) other applications.
Catalyst very much guides you towards using a database, and seems to prefer DBIx::Class for this, so I needed a way of guiding the Model to provide the correct objects, which are not generated directly from Catalyst helpers.
What did I do?
So in the above book, I found the section 'Implementing a FileSystem model'. This shows how to create a Backend class, which takes us out of the ordinary Model style: the call to the Model returns this Backend object instead. We then use the Backend object to contain the logic for obtaining the objects from somewhere outside the Catalyst application, de-coupling the data models from the app and therefore increasing flexibility and maintainability. As I said, these objects are actually being written within another application project.
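As a rough sketch of the pattern (the app, backend class and config here are invented for illustration, not the book's code): override COMPONENT so that asking Catalyst for the model hands back your own backend object directly.

package MyApp::Model::QC;
use strict;
use warnings;
use base 'Catalyst::Model';
use MyApp::Backend::QC;   # hypothetical class that knows file vs database

# COMPONENT is called once at application start-up; returning the
# backend object here means $c->model('QC') yields it directly
sub COMPONENT {
    my ($class, $app, $config) = @_;
    return MyApp::Backend::QC->new(%{ $config || {} });
}

1;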
This has been an interesting venture, which has enabled me to write a web application that concentrates only on the logic for the view and leaves the data handling completely to someone else. We should be production ready within the week, displaying data for the users quickly and simply.
What's the betting someone asks if we can regenerate the data for all previous runs? I won't be betting against it, that's for sure.