Friday 19 November 2010

Talk to Software East - Agile Analysis Pipeline

Last night I gave a talk to the monthly Software East meeting, held by Mark Dalgarno at RedGate Software.

Big thanks to Mark for inviting me, and to everyone who showed up for an interesting discussion on Agile techniques. Also, thanks to RedGate for hosting.

The slides have been uploaded to Slideshare:



It was a great talk to give, and the first time outside of work that I didn't get groans about having used Perl :)

Looking forward to the next Software East meeting in January, which will be Rachel Davies talking about Agile Retrospectives.
I saw Rachel give the second day's keynote at Agile Cambridge, and she was brilliant. She was also very nice to chat to at lunch that day. She has written the book Agile Coaching (Pragmatic Programmers), so at the risk of overloading the session with people, I can thoroughly recommend going to this.

Tuesday 2 November 2010

Monday 18 October 2010

Day 2 of Agile Cambridge

After far too little sleep, I again got the first bus out of Haverhill off to Cambridge, luckily not feeling too bad after the previous night's beer.

Arriving about 8.30ish, I chatted with a couple of people from the previous day, and we went through for the second keynote, Building Trust in Agile Teams - Rachel Davies. Rachel is the co-author of Agile Coaching, and talked a lot about trust: what trust is, how we trust people (both socially and in the work environment), and how we build trust. Examples included borrowing £20 off a member of the audience, and the falling-backwards-into-a-group-of-people's-arms exercise (which worked, although Rachel actually seemed a bit scared that the volunteers would go through with it). This was an excellent keynote. Rachel is a great speaker, and presented the material in a very easy to understand way.

After a coffee break, for me it was back into the Gilb Theatre for the Scrum/XP Add-ons workshop, presented by Jon Mullen and Paul Fairless from BSkyB. The organisation of this was different in that, with a bit of PowerPoint trickery, they were able to let us choose the order of the presentation topics. We did some voting (what we preferred, what our teams did agile), and we chose the order we would put 'stories' in based on complexity (I clearly thought that baking and decorating a cake for 20 was more complex than washing and waxing a car). The method used for this was (I thought) better than story poker: a story card is put on the table, then each person in turn puts a new card either side of it (or between cards), or moves a card. This continues until an order of complexity is established. After this, the cards are graded 1->8 as a measure of complexity. They gave an excellent description of the way that they work (1 week sprints, pair programming, office layout), and a great little video (with sweets!) of a scrum (that had a number of things wrong with it). It all finished with an attempt to win a bottle of bubbly, in a game like the one they play on a Friday afternoon.

Next was lunch, and possibly the biggest thing ever to render me speechless. Rachel caught me and complimented my talk on the Pomodoro Technique. I was quite shocked, and as such completely failed to mention how much I had enjoyed her talk (sorry Rachel). Here was someone who is experienced in presenting, complimenting me on mine. After a little discussion about how I might start bringing Agile into my team here (including the idea of a taskboard, which needn't be on full display the whole time), she moved off and I had another chat with Martin from Aptivate, who was telling me about the work they had done in Zambia bringing women there up to speed with IT, including setting up an Internet Cafe and training staff. It's great to hear about people doing this sort of work, and a shame that they possibly aren't getting as much funding this year to do it as fully as they have done in previous years.

After lunch, the final workshops session. I opted for Visual Management for Agile Teams - Xavier Quesada Allue. In this, we were tasked with creating a Taskboard which would be the focus of the scrum, and would keep track of stories, tasks, if we were on track, who was doing what and backlog.

Here is the one my team came up with

101015_142251

Although, as we went around the room, no two boards were the same. One team (featuring Conny) opted for a Kanban approach

101015_143508

But the coolest had their stories on hearts, which moved along a track. If they stopped, they picked up hairs (a rolling stone gathers no moss) or broke if they were blocked in any way.

101015_145136

This was a great way of getting on task with using a board, and showed, I think, that if the team is involved in the design, it probably adds more ownership and encourages the board's use.

Xavier also showed us some pictures of taskboards he had helped develop for teams. This was an excellent workshop, and well worth attending.


After the final coffee break of the conference (a chance to make sure people had my twitter alias - setitesuk), we headed back into the Gilb theatre for a panel Q&A session, 'Creating a Development Process for your Team: What, How and Why'. The panel was led by Giovanni Asproni, who I had met the previous day in Bob Marshall's workshop, and who previously worked at the EBI (so right next door to me!). Beforehand he admitted that he was happy to lead the session, since he didn't need to do much talking (I must remember that trick next time).

The panel consisted of a number of the speakers from the 2 days (although I can only remember the names of Rachel, Allan Kelly and Willem). The main topic eventually became a debate around pair programming, although other aspects of Agile got mentioned (and even the Pomodoro Technique got dropped in, it having become a discussion point - yay me :) ).

I must say that after two days I was starting to flag a little, so I picked up the least from this session, but it was interesting nonetheless, and rounded off the two days in a great way.

Mark closed off the conference, and we all left (after grabbing more choccies from the RedGate guys!)

Overall, the conference for me was a great success. An opportunity to find out lots about Agile process. A steep learning curve in places, and just a great opportunity to meet others who are producing great software. My favourite workshop/talk has got to go to Gojko and David for 'The Specification Game', for the great way of teaching us all what we do wrong (it is certainly true that you learn more getting it wrong than right), and favourite snippet goes to James A. Whittaker for being so surprised that so many of us Brits still use a phone book.

If I had my time again, the one session I would have chosen not to go to would be Code Debt. As I mentioned in my previous blog, this is not because it was lacking, but because it was the least relevant to Agile.

Thanks to Mark Dalgarno and his team for organising a great conference at a fantastic venue. Thanks to all the speakers that I saw - if I could present even half as well as you all, I'd be happy, and your material was in general excellent. Thanks to everyone I met for engaging discussions, and thanks to RedGate Software for the beers!

I look forward to next year.

Saturday 16 October 2010

Day 1 of Agile Cambridge 2010

Agile Cambridge 2010

Day 1

With much excitement I left Haverhill on the first bus of the day in order to go to the conference. There was method to this madness, since the plan was drinks in the evening, and I wasn't going to pass that opportunity up.

I left with my laptop, a plan of the sessions I ideally wanted to attend, and the bits for my lightning talk.

I arrived at Murray Edwards College around 8.30, and found those of my colleagues (Beth Jones and Conny Brunkvist) who were also attending. I also picked up a pack and some freebies from RedGate Software (choccies!).

At 9 we went through for the welcome talk (Mark Dalgarno - Software Acumen) and the first Keynote.

The keynote was James A. Whittaker (Google), who delighted us all with how testing is run at Google.
As well as unit tests and functional tests, they also have a suite of different Tours, which are designed to go off and fully test the whole application under different situations, but which are named rather 'cleverly' (I would love to know what 'The Couch Potato Tour' is). He also showed us problems that had been sent in with Google Maps, including that the best route to walk from Cambridge to Hull was to take the ferry via Holland.

He also seemed rather surprised that over half of the audience had actually used a phone book in the last 6 months. Clearly a difference between Americans and Brits.

After the first coffee break (good that they were 30 mins - allows plenty of time to chat!), it was the first session with choices. I opted for the workshop 'The Specification Game', since one thing I admit I am bad at is getting specifics up front, and in the end, this only proved it.

Gojko Adzic and David de Florinier introduced us to a 'simple task'. They were hiring us (i.e. they were the customer - note that, I'll come back to it) to build them a blackjack application. We were to go through the first iteration, and should produce something which was playable. We needed a business/project owner (that was anyone who knew blackjack - me!) and at least one dev and one tester.

This was where we made our second mistake (the first being that we completely forgot they were the customer). We went for having the dev and tester as separate people.

I spec'd out what I thought we could achieve in the first iteration (not negotiating at all with the customer), and we went for it. I went for a playable version which wouldn't bother with betting, but would check for blackjack, check that the dealer dealt correctly, and say whether the player or the dealer won.

At the end of the time allowed, it is safe to say that we failed. Reasons:

1) Some of my specs weren't good enough
2) One of my specs wasn't dealt with
3) I spent too much time helping the tester, rather than guiding my two devs and the tester
4) I never consulted with the customer for any user acceptance tests
5) I never consulted with the customer about exactly what they wanted in the first iteration, nor negotiated that it was too much for my team

So, I accept the failure.

Initially afterwards we felt a bit aggrieved, since the one thing that everyone in the room had failed to realise was that Gojko and David were our customers, and so we never felt we could consult with them. You'll notice I mentioned that they did tell us this up front, so we all failed because we never got them to discuss the specifications with us. We chatted with Gojko and David, and found that this was exactly what they were trying to get across: the biggest problem with specifications is that we forget to (or just don't) get the customer involved in what to do that iteration. The other point was that we never asked the customer for UATs, so when we finally got some, they inevitably failed. Also, no-one wrote the tests before the development. Whilst they said there must be at least one dev and one tester, they never said that they couldn't be the same person.

I think that this was an excellent session. One that I have certainly taken a lot away from.

After lunch, I went for the Hands-On session Code Debt (David Harvey and Peter Marks). This was a break away from Agile, but I was interested in seeing exactly what was meant by it (the term is bandied around a fair bit, but not with any real explanation). The best definition of Code Debt that was mentioned is 'It is code that you owe time to'.

We all have written terrible code (I am sure I still do). The purpose of the first exercise was to change/add to some javascript code to make a new set of tests of that code pass. We were given 10 minutes. Some in the group managed it, some didn't. What we were unaware of at the time was that one side of the group was given nice, concise, well written javascript, and the other just javascript that worked (it was all done via TDD) but had no best practice about it. As such it was essentially unmaintainable code, and as such, they weren't expected to be able to do the task.

And this seemed to be pretty much the point of the session. If the code just does the job, but hasn't been thought about (idiosyncrasies of the language, meaningful function/variable names, refactored) then there is some level of debt owed to the code to get it to that point.
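To illustrate the point with a hypothetical Perl example of my own (the session used javascript): both subs below pass the same tests, but the first owes time to whoever maintains it next.

use strict;
use warnings;

# Debt-laden: it works, but the names tell you nothing.
sub doit {
  my ( $x ) = @_;
  my $y = 0;
  foreach my $z ( @{ $x } ) { $y += $z->{p} * $z->{q}; }
  return $y;
}

# The same behaviour with the debt paid down: meaningful names, clear layout.
sub order_total {
  my ( $line_items ) = @_;
  my $total = 0;
  foreach my $item ( @{ $line_items } ) {
    $total += $item->{price} * $item->{quantity};
  }
  return $total;
}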

It was a good workshop to have attended, but in this case, I think, a little out of place at a conference about Agile, and possibly a bit too much time was spent getting over the one point: refactor, and make sure that your naming means something. Still, well presented. (And I also discovered I was probably the only Perl programmer there - or at least the only one willing to admit it.)

Another coffee break, and then on to 'Understanding the Agile Mind: How Mindsets Transform as Organisations Rightshift Effectiveness' (Bob Marshall - Falling Blossoms, twitter @flowchainsensei).

I had shared a few words with Bob at lunch (we seemed to have managed to be at all the same sessions so far), but hadn't actually twigged that he was running this session.

The focus of this workshop was looking at how organisations work at the effectiveness multipliers that give serious increases in production with a lot less waste, but which are so far above the norm that the mindset seems 'alien' to most. Working in small groups, we tried to come up with things that we thought companies at 0/1/2/3/4/5x effectiveness (1x being the norm) would be doing. Bob then mentioned the 4 different types of organisational styles: Ad-Hoc, Analytic (sometimes called Mechanistic), Holistic/Synergistic and Chaordic. (I've just found a pdf paper by Bob about this.)

I think that this is clearly a huge area to look into (I admit by this time of the day I was thinking more about the pub - sorry Bob), but it looks incredibly interesting, and I shall be taking the time to read the above paper soon.

After another coffee break, we had the final session of the day, and a chance for me to Do My Stuff. So, obviously, I attended the Lightning talks (since I was presenting one). I should have gone first, but as with all things, the projection system decided to take that opportunity to have a nap, so I got a bit delayed.

The other talks were really interesting though. Nick ably stepped up to go first instead, with some interesting insights into how agile and scrum very much mimic how humans have always done things, including that daily scrums are like sitting around the tribal campfire.

Bob then came up, with an explanation of how he came up with his twitter moniker. Tiss from RedGate (I think rather badgered into it by Helen) did some improv, very Whose Line Is It Anyway. I then gave my talk on The Pomodoro Technique (see previous blog post), and then Allan Kelly came on to show how 'Doing it Right' versus 'The Right (Business Aligned) Way to do it' affects productivity, cost and sales. Took me right back to A-Level Economics, but a reminder of how things need levels of thought and balance.

After this, a move to the Castle Inn, for a well earned pint (or four) and food, all courtesy of those fine fellows at RedGate, who I spent most of the night chatting to, along with others who were there. (Sorry guys, I can't remember most of your names and it seems that there was no delegate list in the pack).

I left the pub (unfortunately rather abruptly - big apologies to those I was talking to at the time) in order to catch the bus back to Haverhill. I thought some sleep might be useful before the following day.

Friday 15 October 2010

Lightning Tomatoes

I'm currently at the Agile Cambridge conference, having a great time finding out more about agile practices (what I'm doing wrong... and right).

In the lightning talk session I gave a quick 10 minute presentation on The Pomodoro Technique.

Here are the slides:



The talk generated a fair amount of discussion (and some people saying that they will give it a go). I just hope with most it wasn't just the beer talking (Thanks to Redgate Software for that).

Friday 3 September 2010

The Tomato Cometh

So I have now been doing the Pomodoro Technique for 3 weeks, and I have to say that it is working very well.

I am even writing this blog within a Pomodoro - watch for live end/starts and interrupts/failures

I am averaging 8 completed Pomodori a day. You can see my end of day report sheet here.

report_table_20100903

If you take a look, you'll see that the most I get is 11, and the least is 5. There is no direct correlation between failures and successes, but some things to note:

The Boss is away: Less completed and more interruptions/failures as I am being (Pomodoro ends) requested in his place. (finished sentence and save)

Quick 5 minute break and I'm back (start)

Here is one day's To Do sheet from when my boss was off

pomodoro_0820

Discussion with others: It is difficult, especially when I'm the one being asked, to keep these to within 25 minute slots, so an hour can get lost outside of my daily tasks.

So, how does my day work?

First hour - no Pomodori

8 -> 9am
get in, check emails and relevant blogs/websites.
write out plan for today. If there is nothing already on my radar, then set aside a Pomodoro to go through emails/RT to get new things onto my radar, and replan the day

9->9.05 - 5 minute break, ready for first Pomodoro to start.
From now on, I work in Pomodori, 25 minutes and then a 5 minute break.
(Just had a failure - interruption where I needed to discuss something with someone for > 30s - start new Pomodoro)

If I get an interruption of approx 30s (very quick question from my team members) then this only counts (to me) as a brief interruption, and I don't fail the Pomodoro. If it is longer than this, then the Pomodoro fails.

Now, I am expected to act on emails if they are urgent, so at the end of any Pomodoro or interruption, I check my inbox and, if necessary, replan my day so the next Pomodoro (or x of them) acts on an urgent request; otherwise emails get deleted or put aside to act on later.

In my break, I typically get a drink, look out of the window, go for a quick stretch of my legs.

This tends to go on until the end of the Pomodoro that occurs after 4pm. At this point, I don't start another one, but do the daily tidy up (although not of my desk :) ), fill in my report chart, make a final check of emails, and tweet out my successful Pomodori for the day, plus the last song I was listening to.
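For anyone wanting to try this without a kitchen timer, the loop itself is trivial - here is a toy Perl sketch of my own, not any official tooling:

#!/usr/bin/perl
use strict;
use warnings;

my $work_minutes  = 25;
my $break_minutes = 5;

while (1) {
  print "Pomodoro started - focus!\n";
  sleep $work_minutes * 60;
  print "\aPomodoro complete - take a break.\n"; # \a rings the terminal bell
  sleep $break_minutes * 60;
  print "\aBreak over - plan the next one.\n";
}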

Interesting things I have noted over the last 3 weeks:

1) Time goes quickly

2) Breaking jobs up into smaller tasks really helps

3) I can't always predict the number of Pomodori a task will take, but I'm getting slightly better

4) Failing at 20 minutes can make it seem like a task has taken fewer Pomodori than it actually did

5) I can get more done this way (one set of tasks took about 1/2 day less)

6) A five minute break away increases the likelihood of the Eureka solution to the problem you spent the last 15 minutes looking at

7) Time really does go very fast

So now what?

I am going to continue with this, and I am planning a talk on it now. I think the technique really works for me, and I can track what I have done much more easily than just closing RT tickets/completing features in Pivotal Tracker.

I strongly recommend this to anyone, and if you need any more persuasion to give it a go, here is a paraphrased tweet I read a couple of weeks ago:

The PomodoroTechnique quite literally saved my arse - Software developers at increased risk of Hemorrhoids.

So get up and move every 25 minutes - you'll be better for it.


(9 minutes left of Pomodoro, time for review of the post)


The two Pomodori this was written in

- first 15 minutes of P1 - taking pictures of my sheets and storing on Flickr
- review - fix all cases of lower case pomodoro(i) to Pomodoro(i)
- review - deciding that I would add this link to yesterday's To Do Today sheet
pomodoro_0902
- review - find out how to embed the photos instead of links

Wednesday 25 August 2010

MooseX::AttributeCloner v0.21

I have just completed version 0.21 of MooseX::AttributeCloner.

The only change here is that I have forced the command options to come back in a sorted order. This was discovered when we found that, in some tests, we got a different order of generated commands on my developer's MacBook than on the Linux boxes, causing tests to fail. Forcing the options to return in a fixed order solves this problem.
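A minimal sketch of the underlying issue (option names invented for illustration, not the module's internals): Perl makes no guarantee about hash key order, so a command line built from hash keys can differ between perls and platforms unless you sort.

use strict;
use warnings;

my %options = ( verbose => 1, output => q{out.bam}, threads => 4 );

# Unsorted: key order is unspecified and can differ between perls/platforms.
my $unstable = join q{ }, map { qq{--$_ $options{$_}} } keys %options;

# Sorted: the same options always render in the same order.
my $stable = join q{ }, map { qq{--$_ $options{$_}} } sort keys %options;

print qq{$stable\n}; # --output out.bam --threads 4 --verbose 1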

It has been submitted to PAUSE (I hope - it's the first time I have tried uploading from a tag in github), but you can also find it on my github account

github.com/setitesuk/MooseX--AttributeCloner

Monday 16 August 2010

3 Days of Tomatoes

I've been umming and aahing over looking at the Pomodoro Technique as a way of personally increasing my capacity to focus on the task at hand, having heard bits about it on Twitter feeds.

Finally, I bit the bullet, and spent some money on the book 'Pomodoro Technique Illustrated' by Staffan Noteberg (Pragmatic Bookshelf) and read it.

So, all I need is a timer, pen, paper and that's it. Well, that I can achieve.

I had a go the first day after reading the book, and thought I had done reasonably well, but then it all went out the window.

So, two weeks later (last Thursday to be exact), I decided to try again. In earnest.

What a success!

Although I have made a few errors in judging how many Pomodori certain tasks will take (which actually shows that some jobs needed cutting into smaller tasks), I have felt more focused throughout the day, and felt at the end that I have achieved what I needed to.

I have averaged 10 Pomodori over the last 3 working days, and feel that this is a realistic target. Tomorrow, I plan for 10 - let's see if I can keep at this.

Anyone who is reading this and finds themselves procrastinating, I recommend giving this a go. A 5 minute break every 25 minutes really helps, and it is very easy to concentrate on a task for just 25 minutes at a time.

The biggest downside I find - my alarm goes off, and the song on my iTunes hasn't finished.

Thursday 29 July 2010

A word to the Mooses out there (Miice?)

Just found today with upgrade to latest Perl::Critic

Subroutines::ProhibitUnusedPrivateSubroutines

this throws a problem with all the many

_build_

subs, as it thinks they are unused private subroutines, not seeing them called anywhere in the code (Moose calls them by name for you)

To fix (thanks to the CPAN documentation for helping me get to this), add the following to your (.)perlcriticrc:

[Subroutines::ProhibitUnusedPrivateSubroutines]
private_name_regex = _(?!build_)\w+

This will pattern match and allow anything beginning _build_
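For context, a typical Moose lazy attribute looks something like this sketch (names invented); the builder is never called by name in your own code, which is why the policy thinks it is unused:

package My::Class;
use Moose;

has q{config} => (
  isa     => q{HashRef},
  is      => q{ro},
  lazy    => 1,
  builder => q{_build_config}, # Moose calls this by name - Perl::Critic can't see that
);

sub _build_config {
  return { retries => 3 };
}

1;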

Cheers

Thursday 22 July 2010

Difficult to track bug

Here was a difficult bug to spot (using 5.8.8 and 5.10.1, not tried on 5.12):



my $output_path  = $self->output_path();
my $bam_filename = $self->bam_filename_root();

if ( some condition ) {

  $output_path .= q{lane} . $self->position_decode_string() . q{/}.
  $bam_filename .= $self->position_decode_string();

}


This was caused by a lack of due care and attention during a bit of copy-and-paste refactoring.

Unfortunately, the code is perfectly legit, and parses as though the .= after $bam_filename is just a ., without also doing the concat to $bam_filename.
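Presumably the intended code was two separate statements - the trailing . after q{/} should have been a semicolon:

$output_path  .= q{lane} . $self->position_decode_string() . q{/};
$bam_filename .= $self->position_decode_string();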

Bit of a pain in the backside to find this one.

Monday 12 July 2010

A small journey in Benchmarking

20ish lines of verbose code involving hashes, arrays and grouping, in comparison to a magic piece of regex which does the same thing in 3 lines.


use Benchmark q{:all};

my @lsf_indices = ( 1000,1001,1002,1003,1006,3000,3300,3301,3302,3303,3304,3305,3306,3998,3999,4000,4001,4002 );

my %methods = (

  regex => sub {
    my $array_string = join q{,}, @lsf_indices;
    $array_string =~ s/\b(\d+)(,((??{$+ + 1}))\b)+/$1-$+/g;
    $array_string = q{[} . $array_string . q{]};
  },

  verbose => sub {
    my ( $previous, $current_working_index );
    my %consecutive;

    foreach my $index ( @lsf_indices ) {
      if ( $previous && ( $index == $previous + 1 ) ) {
        push @{ $consecutive{$current_working_index} }, $index;
        $previous = $index;
      } else {
        $previous = $index;
        $current_working_index = $index;
        push @{ $consecutive{$current_working_index} }, $index;
      }
    }

    my @array;
    foreach my $index ( sort { $a <=> $b } keys %consecutive ) {
      if ( scalar @{ $consecutive{$index} } == 1 ) {
        push @array, qq{$consecutive{$index}->[0]};
      } else {
        my $last  = pop @{ $consecutive{$index} };
        my $first = shift @{ $consecutive{$index} };
        push @array, $first . q{-} . $last;
      }
    }

    my $array_string = q{[} . ( join q{,}, @array ) . q{]};
  },

);

cmpthese( 30_000, \%methods );
timethese( 30_000, \%methods );


Result


home$ ./benchmark.pl
            Rate   regex verbose
regex     6048/s      --    -72%
verbose  21898/s    262%      --
Benchmark: timing 30000 iterations of regex, verbose...
     regex:  5 wallclock secs ( 4.96 usr +  0.01 sys =  4.97 CPU) @  6036.22/s (n=30000)
   verbose:  1 wallclock secs ( 1.37 usr +  0.00 sys =  1.37 CPU) @ 21897.81/s (n=30000)


The verbose code is around 3.6 times faster (262% in the cmpthese output; the coarse wallclock numbers make it look like 5x). Happy :) Code I can read, and speed benefits as well.

Admittedly, a bit of an exercise, since this isn't really a bottleneck. ;)

Thursday 24 June 2010

File::Spit

John in my office said a couple of days ago that we need the function 'spit' for writing to files, much like we have 'slurp' in Perl 6 (or via one of my 3 favourite CPAN modules, Perl6::Slurp).

Since I had a couple of hours spare, I have now done just that. File::Spit automatically exports the symbol 'spit'.


This function takes a file_path, $data and an optional delimiter flag.

From the POD:

spit:

will croak if, once it has a file_path and a possible delimiter, it finds no data (note, this is different from an empty string or 0)

delimiter - if the delimiter matches against /\A[\s:=,.\/;]+\z/xms, it will be used, else it will just append to the file
perl false values (undef, q{} or 0) will be classified as though there is no delimiter given

will croak if it fails to write, with value of $EVAL_ERROR

Write to file, killing any previous versions of file (note, no warnings)

eval {
  spit( $FilePath, $data );
} or do {
  # your error handling here...
};

Append to file, creating if needed, but no delimiter

eval {
  spit( $FilePath, $string, 1 ); # note, it is just a true value; any perl false value will use write and kill the previous file
} or do {
  # your error handling here...
};

Append as above, but with delimiter

eval {
  spit( $FilePath, $string, qq{\n\n} );
} or do {
  # your error handling here...
};

On success, this returns a true value (1)
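For the curious, here is a minimal sketch of how such a spit could be implemented from the POD above - my illustration, not the actual module code (which is on github below):

package File::Spit;
use strict;
use warnings;
use Carp qw(croak);
use English qw(-no_match_vars);
use base qw(Exporter);
our @EXPORT = qw(spit);

sub spit {
  my ( $file_path, $data, $delimiter ) = @_;
  croak q{no data provided} if !defined $data; # an empty string or 0 is still data

  my $mode = $delimiter ? q{>>} : q{>}; # any true value appends; false clobbers
  my $write_ok = eval {
    open my $fh, $mode, $file_path or croak $OS_ERROR;
    if ( $delimiter && $delimiter =~ /\A[\s:=,.\/;]+\z/xms ) {
      print {$fh} $delimiter or croak $OS_ERROR; # only a matching delimiter is written out
    }
    print {$fh} $data or croak $OS_ERROR;
    close $fh or croak $OS_ERROR;
    1;
  };
  if ( !$write_ok ) {
    croak qq{unable to write to $file_path: $EVAL_ERROR};
  }
  return 1;
}

1;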

This is probably completely unnecessary, or already exists on CPAN. If it doesn't, I'm happy to push it there, but for now, you can get it from github

http://github.com/setitesuk/File--Spit/tree/v0.1

This in theory could work with any type of data, but the tests only check strings.

I'm off on holiday now.

Happy Coding

Andy

Wednesday 16 June 2010

Is there anything wrong with this?

So, we all think reusing other code and not reinventing the wheel is generally a good thing.

And using Moose is generally good.

So, I want to test if something is an object. I am using Moose.

1) Create a large hash to compare keys against, and then do


my $not_object_ref = {
  HASH => 1, ARRAY => 1, GLOB => 1,...
};

my $is_object;
if ( my $ref = ref $object ) {
  if ( ! $not_object_ref->{$ref} ) {
    $is_object++;
  }
}

# stuff that uses boolean value of $is_object


2) Use a Moose Attribute and eval


has q{_i_am_an_object} => (
  isa => q{Object},
  is  => q{rw},
);

sub _is_object {
  my ( $self, $object ) = @_;
  my $is_object = 0;
  eval {
    $self->_i_am_an_object( $object ); # croaks unless $object passes the Object type constraint
    $is_object++; # post-increment returns 0, so the eval is 'false' and falls into the empty do block
  } or do {}; # I like PBP and perl critic :)
  return $is_object;
}

my $is_object = $self->_is_object( $object );
# stuff that uses boolean value of $is_object


I don't know if this is a pure abuse of the Moose attribute system, or if there is a much neater way of doing it, but it certainly has merit over keeping track of which refs are not objects. And of course, if you want to check against a particular object type, you change the isa to the class name, and give the method a better name.

It's probably more of an abuse of eval :)
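As an aside, one lighter-weight alternative worth comparing against (core Perl, no attribute needed) is Scalar::Util's blessed, which returns the class name if, and only if, the value is a blessed reference:

use Scalar::Util qw(blessed);

my $is_object = defined blessed( $object ) ? 1 : 0;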

Andy

Thursday 10 June 2010

When not to shift...

I'm not a big fan of 'shift @array'. And I have just cemented why I think that is.

Anyone who has looked at my code will see that my methods always use array assignments:
sub my_method {
  my ( $self, $arg_refs ) = @_;
}

Even if it is just $self, rather than:
sub my_method {
  my $self = shift;
}

Today, I further found another bug whilst testing. I'm trying out Test::MockObject, and know that I want to return an arrayref, with 1 element in the array.

The code loops through 8 times, and I expect a result from this array each time, since the webservices I am mocking would provide this.

However, in my tests, I get 1 true result, rather than 8.

Why is this?

I check the arrayrefs are the same with some debug (they are). I check the array length each time through. 1st time, 1. All other 7, 0.

The reason is that instead of just assigning
my $ref_seq = $arrayref->[0];

I have shifted
my $ref_seq = shift @{ $arrayref };

So of course, within the code I have removed the element from the array, rather than leaving it there; and since $arrayref goes out of scope as soon as $ref_seq is assigned, in production either works.

This is one time when I would expect the 'real world' to have still been fine, but my 'test environment' can't cope.
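A minimal standalone illustration of the trap (names invented; the real case was Test::MockObject handing back the same arrayref from a mocked webservice call):

use strict;
use warnings;

my $arrayref = [ q{ref_sequence} ];
my $mocked_service = sub { return $arrayref }; # always returns the SAME arrayref

foreach my $attempt ( 1 .. 8 ) {
  my $ref_seq = shift @{ $mocked_service->() }; # destructively drains the shared array
  print qq{$attempt: }, ( defined $ref_seq ? $ref_seq : q{undef} ), qq{\n};
}

# Only attempt 1 sees ref_sequence; $mocked_service->()->[0] would have worked all 8 times.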

Obviously, there are times when shift is most appropriate, i.e. when you deliberately need to remove the first element because you must not allow it to get somewhere else as the array carries on; but that niggling little voice that keeps telling me never to shift finally has a reason to give me.

Still, liking Test::MockObject :)

Wednesday 21 April 2010

It's been a while

It has been a while since I last blogged. Unfortunately, family health problems are still not in a resolved state, so it probably will be a while again.

However, last week, I went on Dave Cross's advanced perl course/seminar. In fact, most of my team did.

Dave knew his stuff (and apologised that he was talking about the 'new' 5.10 when 5.12 was released less than 36 hours earlier).

The course was interesting; however, due to the limitations of time (1 day isn't long enough, and I do like lab-time), we didn't cover as much about Moose/DBIx::Class/Catalyst as I would have liked.

However, he did cover Test::Builder, which I hadn't really seen before, and how to create your own bespoke tests which will fit in with the plan.

And so I just have.

I have released Test::Data::Structures today. At the moment it only exposes one method:

is_value_found_in_hash_values

Now, this is not to detract from Test::Deep (which I like a lot!) or Test::Data::Hash; however, I could not find any methods which check whether a value is present in a data structure, and this was particularly what I was after.

It is on CPAN

http://tinyurl.com/y5owods

and my github repository is

http://github.com/setitesuk/Test--Data--Structures

I plan to add some other methods shortly. These are meant to be simple things though. For example, you may have a hash and a value. You don't know which key the value should be under, or even care which key it belongs to. The above test just sees if it can find it among the hash values.
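For example, a test might look like the sketch below (the argument order here is illustrative - check the module's POD for the real signature):

use Test::More tests => 1;
use Test::Data::Structures;

my %pets = ( cat => q{Felix}, dog => q{Rex} );

# passes if q{Rex} appears as any value in the hash, whatever its key
is_value_found_in_hash_values( \%pets, q{Rex}, q{Rex is in there somewhere} );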

Hoping this might be of use to someone (anyone?)

Andy

Thursday 8 April 2010

Retrieving the $schema object from the resultset object

More for my own personal reference

Within a result (row) object:

my $schema = $self->result_source->schema();

Example reasoning

User wants to be able to obtain (via helper method) all the public groups s/he doesn't belong to

sub public_usergroups {
  my ( $self ) = @_;

  my $schema = $self->result_source->schema();

  my @public_usergroups = $schema->resultset('Usergroup')->search({
    id_usergroup => {
      'NOT IN' => $self->user2usergroups->get_column('id_usergroup')->as_query,
    },
    is_public => 1,
  });

  return @public_usergroups;
}

The returned results are then also Usergroup row objects, so they have all the methods relating to those.

Friday 19 March 2010

BarCamb3

I've just been one of the lucky 20 to get a shiny ticket to BarCamb3 in the first batch of tickets released. WooHoo! I just hope now that my wife is better in time so that I can go.

More tickets are to be released next week, and the following week.

I went to the first 2 BarCambs, which were hosted by Matt Wood at the Wellcome Trust Sanger Institute. They were very interesting events, and I spoke at the 2nd one with a presentation called 'It's Too Much Information For Me', which came to me in the car 5 minutes away from the event, thinking about a Duran Duran song even though the group hadn't been played on the radio.

That's the great thing about BarCamps. They are unconferences, so pretty much anything goes. You turn up on the day, with or without anything to talk about. Mill around having coffee, and put your name up to speak with a topic, or don't, or join some others.

The website for BarCamb is

http://barcamb.ltheobald.co.uk/

and the sponsors this year are Red Gate Software, Paypal and Taylor Vinters Solicitors.

I don't yet know what to talk about/discuss. Perhaps it will again come to me in the car. Also, I don't know how 2 days will be filled up, but whatever happens, it is bound to be interesting, and I'm hoping Simon turns up with Mbed again. (I want to play this year please!)

Andy


Thursday 11 March 2010

Artificial Intelligence vs the human brain

Here is a plea to all people working on Artificial Intelligence.

Please think about what you are doing, and try to avoid loose wiring.

My wife has suffered from post-natal depression ever since the birth of our son 4 and a half years ago. Yesterday, I took her into hospital again to ensure that she is safe (she is plagued by voices at the moment) whilst the new medication ramps up to a level which is therapeutic for her.

This, as you may expect, is quite distressing for all of us. We have support systems in place though, and my work are being frankly fantastic about it all.

It gets me thinking though. I can program computers to get them to behave in particular ways, but how do you rewrite the programming in a brain?

I recently read 'Pragmatic Thinking and Learning - Refactor your Wetware' by Andy Hunt (www.pragprog.com).

This is great for yourself, or even trying to suggest to others. I have found some easy to apply tips and some which I need to get round to trying, but one thing it shows is that we have set ways of doing things. Our Logic/Linear-mode (L-mode) tends to be dominant, and seems to have the most influence, unless, as the book suggests, we deliberately try to get information from our Rich-mode (R-mode).

The key thing with this though is that we control the flow of information, we determine if we will do it. We seem to have a controlling Sensible-mode (S-mode).

And this is what I like about Computer Programming. A computer really only sources from L-mode, and it allows the L-mode to control it. So I can write a program that is logical, instructs the computer, and it doesn't get anything else influencing it (well, assuming I have taken care of external running factors such as OS, file locations...)

My wife, though, is not controlling her own thoughts. What is, we don't know, but not her. Her S-mode seems to lose its control. Whether the thoughts are L-mode or R-mode based, we can't tell (traditionally, it would look like those things are "Right-brain" thought processes, although when she explains her thoughts, you could argue the Logic-mode is having a say).

Luckily, this time, my wife's S-mode seems to be working enough to stop her finishing the act, but she is getting somewhere close to it (popping enough pills out of a blister pack, but at the last minute, throwing them in the bin instead of taking them).

So where am I going with this?

My opening request is to think about AI, and not introduce loose wiring. True Artificial Intelligence should be able to think in L-mode and have R-mode, but with an overriding S-mode to control all the thoughts.

I like programming because what I produce only has to act logically. I am in turmoil because my wife has some 'loose wires' which are not allowing her to act completely rationally, and I can't fix the bug in the program to correct it.

If AI is a true goal of the computer technology industry, then let's hope that the coding which goes into it won't allow for the loose wires which screw up the controller, or else we might just have a lot of depressive machines which need careful looking after whilst we feed them lots of pill programs to try to make them better.

Here's hoping that the pills my wife is now taking soon start to sort out her programming. I'm glad I never became a psychiatrist.


Thursday 4 March 2010

crontab or daemon

So here is an interesting choice I need to make.

I have just rewritten a bit of code to email interested parties when a run with their data on it reaches 2 points: once when it reaches run complete (i.e. the instrument has done all its processing), and again when the data has been post-processed, QC'd and deposited in the central archive space for them to obtain.

I'm quite pleased with the code. It is more robust than the previous hack which we had never intended to be all encompassing, and actually mails the parties that should be interested (rather than some 'user' which may or may not be the right person).

It is, of course, also written using Moose.

However, now I have to decide, which do I choose, a cronjob, or a daemon process.

Cronjob:

Pros - very quick. Just decide how often to launch it, and run the script.
Cons - need to remember which node the cronjob is running on, need to do something with the outputs (logs, etc), need to ensure that jobs don't relaunch on top of each other

Daemon:

Pros - can use a monitor to keep us informed that it is still running, cyclical so runs won't launch over each other, easy to write to a log file
Cons - Need to write a daemon controller script

I'm sure that there are others; I'm mostly babbling and writing this down as I think. Certainly, for the first release of this, I will start it as a cronjob, but down the line I think I will move it to a daemon, once the script has been in a production environment for a while (i.e. we know it is working correctly!).
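On the relaunch problem: one common trick, assuming a Linux box with util-linux's flock(1) available, is to wrap the cron command in a lock file so that an overlapping run simply skips (the script and paths here are illustrative):

# m h dom mon dow   command
*/15 * * * * /usr/bin/flock -n /tmp/run_mailer.lock $HOME/bin/run_mailer.pl >> $HOME/logs/run_mailer.log 2>&1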

Thursday 25 February 2010

Readonly::Scalar, $VERSION and Module::Build/Install

As I blogged before, I have just rebuilt my development area. Within this, I installed the latest version of Module::Build (0.3603).

Now, two things have happened since this. The first I mentioned in my post on the latest release of MooseX::AttributeCloner: the fact that 'passthrough' is being deprecated as an option for producing a Makefile.PL.

I won't go into that one now, but the second is potentially more disastrous.

Before I go any further, I would like to make it clear that this is not an attempt to slag off the people who write/maintain Module::Build. The tool is extremely useful, and I got a very helpful response from David Golden about my RT ticket. I would like to say thank you for producing the tool. The below is possibly more our abuse of it than their failing with it.

We use Test::Perl::Critic to monitor our coding standards. (OK, I accept that it is a set of guidelines, and is in no way compulsory, but it is a start.) Perlcritic wants $VERSION to be a constant, and to do this within the code we use (as for all constants)

Readonly::Scalar our $VERSION => do { my ($r) = q$LastChangedRevision: 8362 $ =~ /(\d+)/mxs; $r; };

(Again, yes, taking the version number from a version control system is supposedly 'not good', but it works for us, and others I know)

When I run perl Makefile.PL, I get the following:
(note, I use the passthrough or small Makefile.PL and perl Makefile.PL to run Build.PL)

Creating new 'MYMETA.yml' with configuration results
Error evaling version line 'BEGIN { q# Hide from _packages_inside()
#; package Module::Build::ModuleInfo::_version::p56;
use Module::Build::Version;
no strict;

local $VERSION;
$VERSION=undef;
$vsub = sub {
Readonly::Scalar our $VERSION => do { my ($r) = q$LastChangedRevision: 8362 $ =~ /(\d+)/mxs; $r; };;
$VERSION
};
}' in /this/package/lib/module.pm: syntax error at (eval 92) line 9, near "Readonly::Scalar our "
BEGIN not safe after errors--compilation aborted at (eval 92) line 11, line 36.

failed to build version sub for /this/package/lib/module.pm at /Users/ajb/dev/perl/5.10.1/lib/5.10.1/Module/Build/ModuleInfo.pm line 332, line 36.

WARNING: Possible missing or corrupt 'MANIFEST' file.
Nothing to enter for 'provides' field in metafile.
Creating new 'Build' script for 'my_project' version '8391.'

OK, in this case, it's a warning, but the Build file is created, and I can do what I need to.

However, updating the Build.PL 'requires' to include some other package libs, the problem becomes more serious.

Error evaling version line 'BEGIN { q# Hide from _packages_inside()
#; package Module::Build::ModuleInfo::_version::p5;
use Module::Build::Version;
no strict;

local $VERSION;
$VERSION=undef;
$vsub = sub {
Readonly::Scalar our $VERSION => do { my ($r) = q$LastChangedRevision: 8212 $ =~ /(\d+)/mxs; $r; };;
$VERSION
};
}' in /another/package/lib/module.pm: syntax error at (eval 28) line 9, near "Readonly::Scalar our "
BEGIN not safe after errors--compilation aborted at (eval 28) line 11, line 19.

failed to build version sub for /another/package/lib/module.pm at /Users/ajb/dev/perl/5.10.1/lib/5.10.1/Module/Build/ModuleInfo.pm line 332, line 19.
Couldn't run Build.PL: No such file or directory at /Users/ajb/dev/perl/5.10.1/lib/5.10.1/Module/Build/Compat.pm line 335.

Here, because it can't identify the version of the 'external' module, it has croaked out.

I submitted a bug report, and David Golden (big thanks to him for responding) suggested that the line could perhaps be changed to

use Readonly; Readonly::Scalar our $VERSION => do { my ($r) = q$LastChangedRevision: 8212 $ =~ /(\d+)/mxs; $r; };

since the problem is in the eval block.

This is fine to do for internal modules, but a problem for anything we install centrally from CPAN (maintenance of code, root access, etc).

I tried using Module::Install as an alternative. This carps errors in both cases, but in neither case does it prevent the Makefile from being created.

This post therefore comes down to 2 things.

1) Information to anyone who really cares or reads this blog.
2) Are we the only people who use Readonly::Scalar to declare 'our $VERSION'? (in which case, MooseX::AttributeCloner is possibly the only module on CPAN which does this)

Thoughts and comments welcome, although I would appreciate people not calling us idiots (or stronger) for using Readonly::Scalar to 'constant'ify the variable (or at least not without good reason). We have looked at use version;, but it doesn't seem to sit right with using a version control value.

Again, thanks to the authors/maintainers of Module::Build and Module::Install for these wonderful tools.

Friday 19 February 2010

MooseX::AttributeCloner v0.2

Yesterday I released v0.2 of MooseX::AttributeCloner. This is just a bugfix release, thanks to those lovely people over at CPANTs.

The problem was that I missed a file in my MANIFEST, so when I built my distribution package, it left it out. Upshot - tests failed.

This has now been fixed; however, I have since discovered a deprecation in Module::Build, which I have also fixed.

I had initially set up my Build.PL file to use

  create_makefile_pl => 'passthrough',

which generated a Makefile.PL, which loaded Module::Build if not installed.

However, this feature is deprecated, and may be removed, since newer versions of CPAN.pm/CPANPLUS and 5.10.1 accept the 'configure_requires' option. So, I have converted to using

  create_makefile_pl => 'small',
  configure_requires => { 'Module::Build' => 0.3603 }

in Build.PL. This is the new way to do it. It is mentioned in the POD and README.

The new version can be found on CPAN here

http://tinyurl.com/ylztfvz

Cheers

Andy

Wednesday 17 February 2010

Generated code from DBIx::Class and Test::Perl::Critic

We have hit an interesting thing in our ever changing code base.

We want to start managing the database schema and use the power of DBIx::Class to keep the code and database in sync by auto generation.

However, we also want to keep to coding standards by running Test::Perl::Critic at maximum severity over the code base.

Problem - the auto-generated code fails the tests, but if we modify anything above the comment line

#DO NOT MODIFY ANYTHING ABOVE THIS LINE

we lose the ability to regenerate the code when we make database changes.

One solution is to say: don't run Perl::Critic over the resultset files. However, this means we need to separate out any manually written code which does belong there (running Catalyst, etc), or we never Test::Perl::Critic our own code. It also means additional maintenance of our auto test each time we add more files.
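One way to automate that separation - a sketch, assuming the generated files all carry the marker comment above - is to critique every Perl file except those containing the marker:

use strict;
use warnings;
use Test::More;
use Test::Perl::Critic ( -severity => 1 );
use Perl::Critic::Utils qw(all_perl_files);
use Perl6::Slurp;

# keep only the files without the generated-code marker
my @files = grep {
  slurp($_) !~ m/DO[ ]NOT[ ]MODIFY[ ]ANYTHING[ ]ABOVE[ ]THIS[ ]LINE/xms
} all_perl_files( q{lib} );

plan tests => scalar @files;
critic_ok($_) for @files;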

Another is to give up auto-regeneration and manually keep the database and code in sync.

So, has anyone come across any solutions to this problem? All comments gratefully received.

BTW - we understand that

1) TIMTOWTDI - some people write good maintainable code which doesn't follow Perl::Critic standards.
2) It would be very difficult for maintainers of code which generates code to constantly keep up with the latest Perl::Critic.

Tuesday 16 February 2010

New release of MooseX::AttributeCloner

I released a new version of MooseX::AttributeCloner yesterday to CPAN.

Here are the changes:

1) BugFix - CPANTs put in a bug report that MooseX::Getopt was not in the dependencies list in the Build.PL module

2) You can now do

  my $NewObject = new::object->new_with_cloned_attributes($CurrentObject);

instead of only

  my $NewObject = $CurrentObject->new_with_cloned_attributes(q{new::object});

However, to do this, both objects need to use the MooseX::AttributeCloner role. It is on my TODO list that $CurrentObject should only need to be a Moose object, and not have to utilise the MooseX::AttributeCloner role.

It's out there now. Any feedback appreciated.

Andy

It's amazing - how many more CPAN dependencies

It's true: if you need something done, someone may well have done it before and released a CPAN module. However, there are some that I think might not have been necessary.

After the initial install of my dev area (see last post), I checked my projects out of svn and git again, and started work. However, over the last week, I found a number of other missing CPAN modules, which I then needed to download.

LibXSLT:

Download Latest version from ftp://xmlsoft.org/libxslt/

./configure --prefix=$HOME/dev
make
make install

Further CPAN modules:

i Cache::Memcached
i XML::Generator
i File::Type
i Math::Round
i Data::ICal
i Date::ICal
i Statistics::Lite
i MIME::Parser
i Perl6::Slurp
i Sys::Filesystem::MountPoint
i XSLT::Cache
i Net::Stomp
i Net::Stomp::Receipt
i GD::Graph::bars3d
i IO::Prompt
i Parallel::ForkManager
i HTML::Tidy
i SQL::Translator
i http://search.cpan.org/CPAN/authors/id/D/DR/DROLSKY/List-AllUtils-0.02.tar.gz (dodgy md5, so use full url)
i Fey::ORM
i Fey
i Fey::DBIManager
i Fey::Loader

I'm not really sure why we particularly needed List-AllUtils, since List::Util and List::MoreUtils are already being used, but someone has done so. Also, I am not sure why this (and List-MoreUtils before it) had dodgy md5sums. (I should probably put in an RT ticket.)

Anyway, just an update.

Monday 8 February 2010

Rebuilding my development area

I thought it was about time to get around to rebuilding my development area, for a few reasons:

1) Housekeeping - My dev area was getting a lot of junk floating around, and rather than just go through and delete, I thought it better to restart

2) About time I upgraded to perl-5.10.1

3) I need to start up a VM soon, and thought it a good opportunity to make notes about what was needed

4) cpan/cpanplus wasn't working for me

5) I've never had GD working properly, and I could be about to lose my desktop at work

So plenty of reasons. I also wanted to try to structure how I set up the various apps, so that it should (in theory) be easier to upgrade any of them. Getting further on, I think this might not be so worthwhile, but at least it's a try.

Here is 'What I Have Done' so far

INITIAL SETUP:

In $HOME

mkdir dev
cd dev

This gives me a base dev directory to use

PERL:

mkdir perl

Download the version of perl you want to install

mkdir perl/<version> (i.e. mkdir perl/5.10.1)

Unarchive the download, go into the directory created from unarchiving, and do the following

./Configure -des -Dprefix=$HOME/dev/perl/<version>
make
make test (go away and make a cup of tea)
make install

This will now give you the bin/, lib/ and man/ directories in $HOME/dev/perl/<version>

Once you have done this, symlink this version to $HOME/dev/perl/current, and add $HOME/dev/perl/current/bin to $PATH

This should make your default perl $HOME/dev/perl/current/bin/perl

Since you have done this, if you now want to download and try another perl, then you can do the same, and just switch the current softlink


LIBGD:

You need to download
freetype-2.3.11.tar.gz
jpegsrc.v8.tar.gz
libpng-1.2.23.tar.gz
zlib-1.2.3.tar.gz
gd-2.0.35.tar.gz

unpack and install

freetype

cd freetype-2.3.11
./configure --prefix=$HOME/dev
make install
cd ..

jpeg-8
(need to look at)
cd jpeg-8
./configure --prefix=$HOME/dev --enable-shared --enable-static
make
make install
cd ..

zlib-1.2.3

cd zlib-1.2.3
./configure --prefix=$HOME/dev
make
make install
cd ..

libpng-1.2.x

cd libpng-1.2.x
CFLAGS="-I$HOME/dev/include" LDFLAGS="-L$HOME/dev/lib" ./configure --prefix=$HOME/dev
make
make install
cd ..

gd-2.0.35

cd gd-2.0.35
CFLAGS="-I$HOME/dev/include" LDFLAGS="-L$HOME/dev/lib" ./configure --prefix=$HOME/dev --with-png=$HOME/dev --with-freetype=$HOME/dev --with-jpeg=$HOME/dev
make INCLUDEDIRS="-I. -I$HOME/dev" LIBDIRS="-L$HOME/dev" LIBS="-lgd -lpng -lz -lm" CFLAGS="-O -DHAVE_LIBPNG"
make install
cd ..

For these, I chose not to create individual versions. You will also note that libpng is 1.2 and jpeg is v8 but says 'need to look at', since the latter doesn't seem to work with this version of gd. However, since I mostly create png images, I'm not too concerned at this time. Must sort it out eventually though.

Graphviz:

This is needed for installation of some CPAN modules
http://www.graphviz.org

Follow instructions on how to install, using $HOME/dev as the prefix

Again, no version specific route taken

SLEEPYCAT libdb-4:

Download from Oracle

unarchive latest version and install

cd build_unix/
../dist/configure --prefix=$HOME/dev
make
make install

Again, no version specific route, and needed for some CPAN modules

CPAN modules:

cpanp is the recommended method to download and install modules from cpan

type

cpanp

and the interactive shell will be launched

If this is the first time, then enter the following

s conf prereqs 1; s save

This will save some of the hassle of needing to confirm installation of required modules

These are chosen because I need to set up a webserver, I work in a Bio place, and some are personal choice. Obviously, if you need others, or not some of these, then pick and choose. They are also loaded in this order for convenience and dependencies.

To install a cpan module, just type

i <module name>

i Bundle::LWP
i LWP::Parallel::UserAgent
i YAML::Tiny
i Module::Build
i Module::PortablePath
i Task::Moose (select all the optional loads)
i IO::Stringy
i Calendar::Simple
i List::MoreUtils (This had a checksum error, so manually downloaded)
i DateTime
i DateTime::Format::ICal
i iCal::Parser
i Digest::SHA1
i Class::Std
i Crypt::CBC
i Crypt::Blowfish
i MIME::Lite
i DBI
i DBD::mysql # force install if you've no test database available, also requires the mysql client development headers - mysql_config needs to be in your $PATH (probably ~/dev/bin).
i DBD::SQLite
i Tie::IxHash
i XML::XPathEngine
i XML::Parser (again, I got a dodgy md5)
i XML::XPath
i HTML::TreeBuilder
i XML::SAX
i XML::Simple
i XML::Handler::YAWriter
i XML::Filter::BufferText
i MLDBM
i Jcode
i Spreadsheet::WriteExcel
i Unicode::Map
i Apache::DBI
i Readonly (another dodgy md5)
i XML::FeedLite (causes lots of prereqs to be installed - would suggest a cup of tea if you have selected auto download of prereqs)
i Chart::OFC
i YAML
i Digest::SHA
i Ace # force install if fails to make as it may have problems connecting to Ace database during tests
i Bio::ASN1::EntrezGene # force install if fails as it looks as though for tests it needs a non-existent CPAN module
i Bundle::BioPerl
i GD (Why has this failed tests?)
i B/BI/BIRNEY/bioperl-1.4.tar.gz # Requires sleepycat libdb-4 to pass tests
i Bio::Das
i Bio::Das::Lite
i App::Ack

manually download and install DB_File - as you need to Change config.in to point at dev/lib and dev/include
manually download and install BerkeleyDB - as you need to Change config.in as above.

APACHE and MOD-PERL:

Apache = httpd-2.2.14;

http://httpd.apache.org/download.cgi

in dev, mkdir -p apache/2.2.14
cd apache
ln -s 2.2.14/ current

This gives space to install this version of apache into, and a softlink to the version we want to use (similar to perl above)

export LD_LIBRARY_PATH=$HOME/dev/lib
./configure --prefix=$HOME/dev/apache/2.2.14 LDFLAGS="-L/$HOME/dev/lib"

add $HOME/dev/apache/current/bin to $PATH

mod_perl 2.0:

http://perl.apache.org/download/index.html

Get latest version of 2.0
$HOME/dev/bin/perl Makefile.PL
# follow instructions, e.g. apxs is at $HOME/dev/apache/current/bin/apxs

make
make install

Change/create the $HOME/dev/apache/current/conf/httpd.conf and $HOME/dev/apache/current/conf/perlconfig.ini as you need to.

Catalyst:

Now the biggie. Catalyst has lots of dependencies. It will take some time, plus it is interactive.
Just install everything - except the extra DBD supports.
You can do them in your own time, but they may make the Install fall over now, which you don't want.

cpanp
i Task::Catalyst

If you have got through this, then congrats.

I have also downloaded subversion and git into my dev area, and have tried to do ImageMagick (although this is erroring that my C compiler won't compile executables, even though it compiled svn and git).

subversion:

Retrieve the latest version and dependency from http://subversion.apache.org/source-code.html
unpack both, the dependency folder should end up in the same directory, and will then be installed with svn

mkdir -p $HOME/dev/subversion/<version>
cd $HOME/dev/subversion
ln -s <version> current

cd into unpacked folder

./configure --prefix=$HOME/dev/subversion/<version> --exec-prefix=$HOME/dev/subversion/<version>
make
make install

add $HOME/dev/subversion/current/bin to your $PATH

This will enable you to have/try multiple versions of svn in the same way as Perl and Apache above

git:

Retrieve the latest version from http://git-scm.com/
unpack

mkdir -p $HOME/dev/git/<version>
cd $HOME/dev/git
ln -s <version> current

make configure
./configure --prefix=$HOME/dev/git/<version>
make
make install (had to do as root)

ImageMagick:

problem with my gcc version at this time

mkdir -p $HOME/dev/imageMagick/<version>
cd $HOME/dev/imageMagick
ln -s <version> current

./configure PREFIX=/Users/ajb/dev/imageMagick/6.5.9 EXEC-PREFIX=/Users/ajb/dev/imageMagick/6.5.9 LIBS=-l/Users/ajb/dev/lib --enable-shared --disable-static

After this, you should have a nice, fairly 'clean' dev area. If you want to install other stuff, then I would recommend the versioned-directory method suggested above for whatever you download. (I can also recommend the MOCA installation idea for mysql, although I chose not to have that in my dev area.)

Once inside this, I then create folders for my projects, using svn or git to version control within those folders, just adding the directories to my path as I need to.

Note: I am using Mac OS X Leopard. At times, for the make install, I have needed to sudo make install. I accept no liability for anything that happens should you follow these instructions on any system, but hope that they might be useful for anyone who would like to set up a dev/test area but is not sure how to go about it.

Cheers

Friday 29 January 2010

Call a Spade a SPADE!

AAARRGGGHHHH! Once again, I have found myself trawling around some code (ours) which sits on top of someone else's API representation, because someone just can't call a spade a spade.

Has the world gone mad over the last year? What is the benefit of making people trawl through some huge long chain of calls just to find out that your asset is (or is not) what you thought it was?

OK, so what is the issue?

Imagine you have Food that you want to describe. Now, each and every item of food is an asset to you.

On top of that, some of those have child 'assets'. So, I want some Weetabix and Banana for Breakfast.

I look in my 'cupboard' (an asset I have) for the Weetabix. What I find is a whole store cupboard full of further 'assets'. So, I check each asset to see if it is Weetabix. I find an asset which is a 'cereal box', containing 24 'assets'. I check its contents, expecting to find Weetabix.

But wait. It also contains 'assets'. Eh? I interrogate the first of these assets, and find that it is in fact a Weetabix biscuit. But what did I miss? Why not give the cereal box a subgroup or collection name which describes the assets it contains?

It goes further. Someone has taken my fruit bowl away, and instead replaced it with an 'asset'. In this case it tells me it's a bowl, but not a bowl of what. Again, I find myself searching down to discover where my Banana is.

And don't get me started on the milk...

I work in a fairly good object oriented world. I accept that objects can have different (and sometimes multiple) class inheritances (Weetabix is a 'Cereal', which can also be 'Breakfast' and 'Food'), but why force someone to work down from the top: I have an Asset; it is Food; it is also Cereal; I might eat it for Breakfast; it contains further assets, which are CornFlakes. So that's not the box containing Weetabix then. Great, let's try the next one.
(Of course, finding CornFlakes might make me suddenly change my mind, but my mind 'asset' has had Weetabix coded into it, so I will only accept that today).

There is a reason that Food comes in well labelled cartons (usually). It is so that you don't need to open that carton to find out what is in it (I would imagine Supermarkets not being too happy about that!). I'm all for class inheritance (or Role inheritance in the fantastic case of Moose), but I should be able to get my Weetabix, and then ask questions (should I want to) like: Are you a cereal, breakfast, (an Asset?)?

(It is true that Supermarkets tend to put food into groups, in aisles, but the aisles are usually well labelled as to what you will find in them, so you don't need to go down each one.)
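
To put the rant in code: with something like Moose roles you can label the thing itself, and then ask it direct questions, rather than descending a tree of generic 'assets'. A minimal sketch (all the class and role names are mine, for illustration only):

package Cereal;
use Moose::Role;

package Breakfast;
use Moose::Role;

package Weetabix;
use Moose;
with qw(Cereal Breakfast);

package main;
use strict;
use warnings;

my $item = Weetabix->new();

# ask the object directly what it is
print "It's a cereal\n"          if $item->does('Cereal');
print "I can eat it right now\n" if $item->does('Breakfast');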

Rant over. Now, how should I describe Chocolate? Asset, Food, Meal, Essential...

Monday 25 January 2010

Fighting with a LightSaber

Today's coding could be fun, as soon as the Ibuprofen wears off.

Yesterday I hit 35, and my son, already a huge Star Wars fanatic at 4, gave me Lightsaber Duels for the Wii, and a lightsaber attachment for the WiiMote.

Fantastic. This was what I wanted the Wii for. It took my wife to want it for WiiFit to actually have a valid reason to get it, but I can finally be a Jedi Knight.

Or so I thought. I hadn't quite accounted for my fitness levels. I play Badminton at least once a week, and have been doing some exercise, but after 2x45 minutes of duelling, my bicep is starting to seriously complain.

So, what has this told me about today's work?

1) Write as little code as possible. Always the mantra of a coder, since you can't have bugs in code that was never written.

2) Try to take those microbreaks that H&S are going on about.

3) Don't sit there wishing to play your latest computer game, when you should be coding. You need to take a break from that!

Some inane ramblings from me today. If you have got this far, then don't forget to raise a glass of whisky to Robert Burns today, and eat plenty of Haggis and Orkney Clapshot.

Monday 18 January 2010

From ClearPress to Catalyst

We have had a minor problem with our tracking system which has made it difficult to give it out completely to the outside world, due to the way it is deployed onto our webserver setup.

In order to be able to 'give away' responsibility for running servers, and some design aspects, we have tied it to our work's internal modules for locating lib files (WorkPaths) and for content/page styling and authentication (WorkWeb).

Unfortunately, because of this, without a lot of work we are rather stuck with its limitations, and so there is 'work to be done' if outside people want to use it.

So, now, I am taking a break from working just on the pipeline, and attempting to break the tracking system out into a standalone application, using Catalyst. Sorry Roger - ClearPress is great, but we need something with a bigger support network.

Now, ClearPress works. I am still a fan of 'if it ain't broke, don't fix it', and Catalyst is flexible about what it uses as the Model, so for these reasons I have worked out the following strategy for the move.

1) Leave the models alone.

ClearPress already has a good way of talking to the database, and the models already have most of the supporting code in them, so why change them? Catalyst can use other models as a back end, so I am going to leave these as they are for now. This does mean that to operate, you need two frameworks installed from CPAN, but ClearPress is pretty lightweight in its dependencies, so I don't feel that this is an issue for now.
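
As a flavour of what 'other models as a back end' can look like: CPAN's Catalyst::Model::Adaptor will expose an existing class as a Catalyst model. A rough sketch, in which MyApp and tracking::model::run are invented names standing in for the real app and ClearPress model:

package MyApp::Model::Run;
use strict;
use warnings;
use base 'Catalyst::Model::Adaptor';

# wrap the existing ClearPress model class, completely unchanged
__PACKAGE__->config( class => 'tracking::model::run' );

1;

A controller can then call $c->model('Run') and use all the existing ClearPress methods as before.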

2) Keep the templates.

The preferred templating system for both ClearPress and Catalyst is Template Toolkit (much to my new boss's wish that Mason might be better - i.e. using Perl - but I have wittered on about this in the past, so won't here). This is great, because all I have to do is move the templates over to the Catalyst structure, with a small amount of recoding.

Here, I have also gone for the approach of putting the files into directories named like their parent Controller/View, so the directories are easier to navigate.

3) Views/Controllers.

This is a semantic thing: in ClearPress they are called Views, in Catalyst, Controllers. This is the main part of the recoding, because the methods are often autogenerated for you in ClearPress, or at least must follow a strict pattern. Catalyst operates differently here (chained dispatch, a different place to find form params, etc), so this is the slow part.

However, again, here is an opportunity not to be missed. We drifted a little from the Fat Model, Thin Controller/View philosophy in the ClearPress app (not a failing of ClearPress - it is just as easy to do in Catalyst/Rails or probably any other MVC framework; it was pure laziness on our part as developers). So, I have taken it upon myself to move code to the relevant Model where necessary.

Some URLs have changed, and this is again a clear difference between the two, particularly in trying to make the URLs RESTful, but I am managing it.
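
For a flavour of the chained dispatch just mentioned: a URL like /run/42/status gets split across small, linked actions. A rough sketch (the controller, action names and the load() call are invented for illustration, not our real code):

package MyApp::Controller::Run;
use strict;
use warnings;
use base 'Catalyst::Controller';

# matches the /run/<id> part of the URL and stashes the run object
sub base : Chained('/') PathPart('run') CaptureArgs(1) {
    my ($self, $c, $id) = @_;
    $c->stash->{run} = $c->model('Run')->load($id);
}

# matches /run/<id>/status at the end of the chain
sub status : Chained('base') PathPart('status') Args(0) {
    my ($self, $c) = @_;
    $c->stash->{template} = 'run/status.tt2';
}

1;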

4) Styling and JavaScript

Most of our styling uses CSS, so I am taking the opportunity to create a CSS sheet with everything that is needed, and hopefully nothing more. Our internal web styling is changing, and also, for external release, other users need a CSS sheet which will work for them.

A similar thing with JavaScript. However, a big change is the fact that the core web here is moving from Prototype/Scriptaculous to jQuery. I have already seen some instances that are a problem, since we have coded against P/S and there are conflicts with jQuery (both want the $ function, which is exactly what jQuery.noConflict() exists to work around). So again, I am trying to 'shield' the app against conflicts here.

So far, this move is going well, and pretty rapidly. The CSS/JavaScript is pretty much complete, and the templates take minutes to alter, so I can concentrate on the Controllers, and on refactoring some code to the Model, where it belongs. Ultimately, the intention is to move the Models as well. We have already run DBIx::Class over the tracking database, so it shouldn't take much to 'fix' them with the code needed, but small steps, and they ain't broken.
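
(For anyone wondering what 'running DBIx::Class over the database' involves, it is a one-off generation of the schema classes, roughly like the sketch below; the schema name, DSN and credentials are placeholders, not ours:)

#!/usr/bin/env perl
use strict;
use warnings;
use DBIx::Class::Schema::Loader qw(make_schema_at);

# introspect the tracking database and dump Result classes to ./lib
make_schema_at(
    'Tracking::Schema',
    { dump_directory => './lib' },
    [ 'dbi:mysql:dbname=tracking', 'username', 'password' ],
);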

The only thing I have left out here is authentication. The new boss said he'll deal with that. Currently, I am permanently signed in as an admin, so we shall see how that goes when I have migrated all that I can.

Speaking of which, better get to it!

Monday 11 January 2010

I Don't Play WoW

Over the last few weeks, I have noticed a surprisingly large number of phishing emails in my inbox, all allegedly coming from blizzard.com and relating to my World of Warcraft account.

They have improved as well.

1) The first ones were every 3-4 days, telling me that I appear to be trying to sell my WoW account. This is against the rules, and I am in danger of having the account shut down as per guidelines if this can be proven.

- I find this difficult to believe for 2 reasons:
i) I don't have a WoW account, or even own/play a copy of the game
ii) If I did, I doubt (from past gaming experience) I'd ever have one worthy enough to actually sell.

2) Then came the ones with all the misspellings that you 'would never notice'.

- I wouldn't follow these links because:
i) See 1i
ii) I have a pretty good grasp of the English language, and I can spell (I only ever failed 1 spelling test at school, and the word did have more than 3 syllables)

3) Then came a ramp up in number - 2 a day - and the misspellings disappeared.

- I wouldn't follow because:
i) See 1i
ii) I am not as thick as two short planks

Now, Blizzard, if you are trying to contact me, then please feel free to cancel my account. My email address is already being fraudulently used, and so I am happy for you to ban it (please note, the email address used is different to this one).

If, however, you are one of the guys phishing for account details - please give up. I am getting rather tired of picking apart your emails for potential problems. You need to realise that, if this is to work, you need to be a little more subtle in your approach than you are. People are waking up to the fact that these emails are simply not real - the eBay/PayPal ones have stopped - so just give it a rest. Why don't you take up real fishing? Perhaps you could make money selling whatever you catch, surplus to what you eat. Start a business, and become a useful member of whichever society you live in.

For anyone who does read this blog, and thinks that I should be playing WoW, especially as a Perl Software Developer/Techie/Geek/Nerd, please don't spam/flame me. I chose the life I live, and I am happy with Lego StarWars and other simple Wii games.

Rant over.

Friday 8 January 2010

I managed it.

So, finishing off from the last post: I had to do another bit of file manipulation, both internally and to the read names, but I got the files back to how they should be. Now, hopefully, we have a pipeline that does what it is supposed to again.

No-one would ever break it again now, would they?

:)

Thursday 7 January 2010

Finally, I get somewhere, I hope

We can't expect things to stay static, but you would hope that when they change, you find out with enough time to make changes - and that they actually tell you!

However, this doesn't always happen, and sometimes the changes are quite major. As any regular readers know, I work at a high throughput sequencing facility, with a major analysis pipeline running all the time. We need to be adaptable, and have become more so, but something changed which hit us hard.

All of a sudden, something that previously expected 3 inputs wanted only 2. Also, those two inputs are numbers 1 and 3. And the description needed the middle part stripped out. And the files needed changing, both in name and internally. And ...

You get the picture.

The problem occurs because we expect certain outputs, produced in a certain way, in order to make the final files that our researchers are expecting. This was no longer going to be what we would get. AAHHH!

So, a think about it, a look around the code producing the files, and we come up with a strategy.

1) Rename the 3 files such that the middle file only pseudo-exists to the next part of the pipeline (there is a sketch of this trick just after this list).

2) Run the next part of the pipeline with only those two files.

3) Rename the files back to the original expected names, moving in the middle file again.

4) Keep our fingers crossed :)
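
The rename-aside/rename-back trick from steps 1 and 3, as a minimal Perl sketch (the file names are invented stand-ins; ours carry the real descriptors):

#!/usr/bin/env perl
use strict;
use warnings;

# step 1: move the middle file aside, so 'Stage 2' only sees inputs 1 and 3
rename 'input_2.txt', 'input_2.txt.aside' or die "hide failed: $!";

# step 2 happens here: run 'Stage 2' over the two remaining files

# step 3: move the middle file back in for the rest of the pipeline
rename 'input_2.txt.aside', 'input_2.txt' or die "restore failed: $!";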

So, first things first, rename the files. Now at this point, we think there are only 3 input files to 'Stage 2' as I'll call it. By stripping out the middle descriptor, and renaming, we run the pipeline.

Error...

The error: 'Stage 2' believes there to be a middle descriptor still. It turns out two files are created before 'Stage 1', in prep for 'Stage 2', still with the middle descriptor in them. 'Stage 2' then reads these files, and submits the middle descriptor to the scripts it kicks off. Pants.

So, change these created files. Unfortunately, this can't be done from the config file we create, so after creating the other files, we go in and physically change them.

Result! But then...

We thought it wanted 3 files reduced to 2. Actually, it wants 6 reduced to 4. Another set of 3 files is required.

So, rename those as well, and Result!

I have now successfully run 'Stage 2'.

The next step is to get the stuff back to how we expect for the production of our own output files. Wish me luck, I'm going in.