Author: ccornutt

“Done done”

As I’ve been reading through “The Art of Agile Development” (from O’Reilly, by James Shore & Shane Warden), I’ve come across a whole load of great suggestions for development practices, and not all of them are restricted to an “Agile only” kind of world. One of the major ones that’s stuck in my head (and is repeated quite a bit in the book) is the idea of being “done done” in your development.

The average PHP developer seems to say they’re “done” when the code passes their own quick manual testing, usually just entering a few values to make sure nothing obvious breaks. I hate to break it to them, but this shouldn’t be considered done. Getting the code working is only the tip of the iceberg – there’s plenty more to do before everything is ready to deploy. Besides the code being complete, “done done” also means:

  • Having documentation, both in the code and about the code. If future generations of developers come along and need to update your code, good documentation is a must. One-line comments aren’t going to cut it…at the very least, use something like phpDocumentor’s docblock format so you can generate automated docs (see the sketch after this list).
  • Making those unit tests pass every time is essential. (You do unit test, don’t you?) Writing tests for all the bits and pieces of your application may be a pain, but it can save you in the long run. Imagine being able to fire off a test suite and being confident that those changes you just made haven’t broken anything.
  • The code has to work in the build for your project. Some projects are small enough that they don’t really need a build, but if your project is much more than a simple site with a database backend, you could benefit from one[1]. Developers should be able to run local builds to check their work prior to a commit and push.
  • It must fulfill what the customer wants in the update or new feature. Sometimes it just happens – you get to writing your code and you think about “this one cool feature” or “that other fun thing” that PHP makes so easy, and you wonder why the customer didn’t ask for it in the first place. Be very careful with this kind of thinking; it can lead to some pretty random stuff making it into the final product. In the end, the customer needs to be happy with what you give them – be sure it’s what they asked for (user acceptance testing).
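
As an example of the kind of documentation I mean, here’s a minimal sketch of a phpDocumentor-style docblock. The function and its parameters are made up for illustration, but the tag format is the standard one the tool picks up automatically:

[php]
/**
 * Calculates the shipping cost for an order.
 *
 * (Example function for illustration only – not from a real project.)
 *
 * @param float  $weight      Total weight of the order in kilograms
 * @param string $destination Two-letter country code for the destination
 * @return float The shipping cost in dollars
 * @throws InvalidArgumentException If the destination code is unknown
 */
function calculateShipping($weight, $destination)
{
    $rates = array('US' => 4.50, 'CA' => 6.25);
    if (!isset($rates[$destination])) {
        throw new InvalidArgumentException('Unknown destination: ' . $destination);
    }
    return $weight * $rates[$destination];
}
[/php]

Run phpDocumentor against code commented like this and you get browsable API docs with no extra effort.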

Being “done done” means that, if you handed your code over to the testers or even the end customer, you’d be confident that it works and is ready to be integrated. Sure, it’s about the code being tested and correct, but it’s also about making it behave as a part of the larger whole. Your code (code ownership is a whole new can of worms) shouldn’t be viewed as “this part that does this” but as one more piece in the larger puzzle of your team’s application.


[1] What’s a build? A build lets you automate several things you might currently do by hand to get your code ready for production. This can include things like: building out databases, minifying code, running unit tests and checking syntax on all files to be pushed.

Working with WebTests (Help?)

Well, I wish I could say this was going to be a guide to getting WebTest (an automated front-end testing tool that sits on top of Ant) up and running, but it seems I have a bit of a Java issue to contend with first.

Here’s what I’ve done so far:

  • First, I noted happily that Ant was already installed on my MacBook’s OS X install (seems a strange choice to have bundled, but who am I to judge), making for one less thing that I’d have to get up and running
  • Next I grabbed the latest install of WebTest from the Canoo website and unpacked it to a testing directory. It comes as a nice, neat binary so there’s no messy compiling to worry about.
  • I wrote up a simple test and PHP page to try it all out:
    [php]

    [/php]

  • Finally, I gave it all a whirl: “bin/webtest -buildfile /my/path/mybuild.xml”

This is where I hit the first snag…apparently, since OS X’s Java install is a bit different from most of the standard ones, a few of the files aren’t where they need to be. In my case, I was getting this error:

BUILD FAILED
/www/tests/mytest.xml:5: 
java.lang.NoSuchMethodError: 
org.apache.xpath.compiler.FunctionTable.installFunction
(Ljava/lang/String;Ljava/lang/Class;)I

After poking around a bit on the web, I happily came across a post that recommends copying a jar file over to the Java extensions directory. A simple “cp” later and the build made it past that point. Unfortunately, it didn’t get much further – another Java error popped up, and this one seems a bit more difficult to find information on:

BUILD FAILED
/dev_tmp/webtest/webtest.xml:265: 
java.lang.NoClassDefFoundError: 
org/apache/xml/serializer/ExtendedContentHandler

So far I haven’t been able to find much that’d help me solve this one, so if there’s anyone out there who’s seen it before, help would be appreciated. It seems like it actually runs the test just fine (according to the output reports), but that’s not very useful if the build always fails.

MQ+PHP – Linking IBM’s WebSphere MQ to PHP

During a recent project at work I had to get PHP linked with the IBM WebSphere MQ software we have running on another internal server. Our goal was to use our existing web service to take requests from external vendors and push their XML data back to a queue inside our firewall. Thankfully there’s an extension in PECL that does just that.

Here are the basic steps I took – hopefully they’ll be useful to someone else out there in the same spot I was. This all assumes you’re working on a web server that doesn’t already have an MQ server installed:

  • Get the extension: Head over to the PECL page for mqseries and download the latest version. Unpack it into a directory on your local server
  • Get the MQ client libs: You’ll need to go to IBM’s website to download the latest client/libraries for your install (you’ll need an IBM ID to get to the downloads):
    • Go to the IBM page for the MQ client listing
    • Look for the “WebSphere MQ Clients” link under the “Related products and technologies” section and click on it
    • Scroll down to the “Download Package” section and choose from one of the mirror locations
    • Select your package from the list (I went with “Linux for System x86” for our setup)
    • Click on the download link and fill out some required information (you didn’t think you were getting off that easy, did you?)
    • Agree to the terms and conditions and you’ll get a “Download Now” link
    • Drop the archive file (tar, tar.gz, etc.) onto your server and unpack it into a temporary directory (mine extracted straight into the current directory rather than creating its own subdirectory, so a dedicated directory keeps things tidy)
  • Install the package(s): Once you have the IBM software extracted, you should have a series of packages. You’ll need to install the “MQSeriesSDK” to get the right libraries in place to compile the PHP extension
  • Build the mqseries extension: Go into the mqseries directory and run “phpize”, “./configure” and “make” to create the .so file. The process should drop it into the default extensions directory.
  • If needed, move it: Be sure that the shared module for the extension is in the right directory for the PHP install to find it. (You can make a phpinfo() page if you’re not sure where that is.)
  • Update your php.ini: Add a line (something like “extension=mqseries.so”) to load the extension in your current setup. Remember, after any changes to the php.ini, you’ll need to restart the web server.

Now for the fun part – if everything’s working and the extension shows up in your phpinfo() as active, give this script a shot and see if you can connect to your MQ server:

[php]
$mq_host_ip = '127.0.0.1';
$queue_name = 'HOST.REMOTE.Q';
$mq_server  = 'WBRK_QM_U49';

$mqcno = array(
    'Version' => MQSERIES_MQCNO_VERSION_2,
    'Options' => MQSERIES_MQCNO_STANDARD_BINDING,
    'MQCD' => array(
        'ChannelName'    => 'CLIENT.CHANNEL',
        'ConnectionName' => $mq_host_ip,
        'TransportType'  => MQSERIES_MQXPT_TCP
    )
);

// Connect to the MQ server
mqseries_connx($mq_server, $mqcno, $conn, $comp_code, $reason);
if ($comp_code !== MQSERIES_MQCC_OK) {
    trigger_error('Cannot open connection to server: ' . $mq_server, E_USER_ERROR);
} else {
    echo 'Connection good!';
}
[/php]

Obviously you’ll need to adjust the settings to fit your server, but at least this gives you a start.
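
If the connection check passes, the next step for us was pushing the vendors’ XML onto the queue. Here’s a rough sketch of what that looks like with the same extension – the message content is made up, and the option constants and descriptor keys below follow the PECL mqseries examples, so double-check them against the extension’s documentation for your version:

[php]
// Describe the queue to open, reusing $conn and $queue_name
// from the connection script above
$mqods = array(
    'ObjectType' => MQSERIES_MQOT_Q,
    'ObjectName' => $queue_name
);

// Open the queue for output
mqseries_open(
    $conn,
    $mqods,
    MQSERIES_MQOO_OUTPUT | MQSERIES_MQOO_FAIL_IF_QUIESCING,
    $obj,
    $comp_code,
    $reason
);
if ($comp_code !== MQSERIES_MQCC_OK) {
    trigger_error('Cannot open queue: ' . $queue_name, E_USER_ERROR);
}

// Message descriptor and put-message options
$md  = array('Format' => MQSERIES_MQFMT_STRING);
$pmo = array('Options' => MQSERIES_MQPMO_NEW_MSG_ID);

// Put an (illustrative) XML payload onto the queue
$message = '<order><id>12345</id></order>';
mqseries_put($conn, $obj, $md, $pmo, $message, $comp_code, $reason);

if ($comp_code !== MQSERIES_MQCC_OK) {
    trigger_error('Put failed, reason code: ' . $reason, E_USER_ERROR);
} else {
    echo 'Message queued!';
}
[/php]

When you’re done, mqseries_close() and mqseries_disc() will clean up the queue handle and the connection.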

Thinking Agile

Along with some of the fun new things I’ve been working on in regards to development and deployment, I’ve also been reading up on agile development methods. I’m only getting started and I can already tell you – it’s not like anything you’re used to (well, unless you’ve already “gone agile”, of course).

Despite all of the hype right now around Scrum, I figured I’d get my feet wet with the agile concepts through Extreme Programming first. I’m assuming that most of the principles will be similar, with varying implementations between the two. My “manual” of choice to get started with has been O’Reilly’s “The Art of Agile Development” (James Shore, Shane Warden) and it’s been an eye opener.

See, I’ve come from the place where, I imagine, most developers out there are coming from and probably will be for a while yet. You spend months gathering requirements, you estimate the times, you set the deadlines – all very structured and, depending on who you’re doing the work for, potentially wasteful. We’ve all experienced the frustration of changes to requirements that were set at the beginning of the project. Things tend to explode when someone changes “one small thing” and the entire development track is suddenly ripped into pieces and put at the complete mercy of whatever the customer wants.

In short, it sucks.

From what I’ve gathered so far, the whole concept behind agile development is to prevent things like this. It makes it simpler to move around in the project and change the things that need changing. Work is done in short sprints instead of one long development process, and the client/business representative selects what goes into the next development session. Requirements are more fluid, and testers don’t have to wait until everything’s done to find where things break.

I’m definitely still in the learning process, but so far, this agile process doesn’t seem half bad. I just wish it didn’t require such a large change in the processes of the surrounding company. I might be tempted to suggest it around my office…

Speaking in the Fall

With the announcement of the speakers for this year’s Zend/PHP Conference it seems I’ll be giving three talks this fall (in the span of two months):

First at CodeWorks 2009 (Dallas) I’ll be giving a talk on best practices and coding standards, and the tools that can help with both in your PHP development:

  • “B,S,T…Easy as 1,2,3”

The other two will be at ZendCon (in San Jose). They’re on two different topics:

  • “Taming the Deployment Beast” – looking at some of the development and deployment practices that can make releasing your code simpler
  • “Right Where You Belong (The PHP Community)” – no matter what your skill level or area of focus, everyone has a place they can call their own in the PHP community. This talk highlights a few of them.

Hope to see you all there! Here’s more info on the two conferences: CodeWorks (Sept. 26th-27th in Dallas) and ZendCon (Oct. 19th-22nd in San Jose)

php|tek & our community

This past week I was fortunate enough to attend this year’s php|tek PHP conference up in Chicago. It was four days packed with some great PHP-related content and included two major events for the PHP community – a “standards session” and a meeting of several core developers of the language to hash out some standing issues face-to-face. The week saw a nice blend of sessions related directly to the language and more peripheral topics like MySQL tuning, project management and version control systems.

In the midst of all of this, there was something else I saw that, I have to admit, slightly caught me off guard. See, I’ve been a part of the PHP community for years now and I’ve seen some of the good and the bad along the way. Last week I saw something that gave me hope about the community and the future of the language – I saw that PHP (and its community) is growing up.

Back when I first started out in the community, PHP wasn’t taken quite as seriously as it is now. Sure, there are still those who dismiss it as “one of those languages” that isn’t ready for anything more than small sites or little jobs. The truth is, PHP is running lots of major sites out there and running them very well. A shift in perception like this is all well and good but, especially with an Open Source language like this, there need to be people there to back it up. Lots of other languages have corporate backing to keep them going, but PHP is in a strange place. Zend, the company most people associate with PHP, is not a direct supporter of the PHP project. They provide resources (and some of their staff) to help work on the language, but it’s not a direct involvement as a “sponsor”. Instead, PHP relies on the strength of those in the community to support it through the good and bad times and to help it weather any major storms that might come its way.

At php|tek, through all of the sessions and after-hours activities (oh yeah, we like to party), I could still see people stepping up to fill the spots that needed filling. You could see it in the leadership of the core development team, in the community support from both here in the U.S. and overseas and, most importantly, in the words and actions of long-time members of the community willing to take the steps needed to keep the community moving. I’ve seen these people come from humbler beginnings – some joining the community after me – to become a strong foundation for the rest of the group to build on.

PHP’s future is bright with this solid crew at the helm. To all of you who have made and are making PHP and its community what it is today, I thank you.

Slides for my php|tek talk: “No Really, It’s All About You”

I’ve put the slides for my framework presentation from this year’s php|tek conference – “No Really, It’s All About You”, comparing CakePHP, CodeIgniter, Solar and the Zend Framework – up on Slideshare.

Unfortunately, no one was there to record the resulting “discussion” that came from the questions after – heh.

Working towards a better deployment (Part 4)

I’m back with the fourth part of my look at the deployment of PHP applications, this time focusing on the actual deployment technologies. I’ve already talked some about version control, build tools and unit testing in the previous parts of the series, and this new information should round it out nicely with a simple approach to the push part of the deployment.

As it stands right now, the final step in our deployment is one of the simplest. We have some bash scripts that are aliased to rsync commands for pushing files out to each of our different sites. Since the assumption is that anything in the QA/staging area is good to go, the rsync commands compare it against what’s out on production and, if there are any differences, overwrite the remote version. It’s light, simple and makes it easy to push out an entire site in one go. Things can get a bit tricky if you only want to deploy a portion of the site at a time, though. There haven’t been many cases where we needed to do this, but it would have been nice to have a way to break it up a bit more.

I posted two TwtPolls in the past few days asking some of the other developers in the community (some outside of PHP too) which method they preferred for deploying their sites out to a production server. Here are the results of the latest poll (hopefully it’ll stick around for a while – I’ll mention the numbers too, just in case).

There were 45 votes total on ten different options…here’s how it came out:

FTP/SFTP [ 20% (9 votes) ]
SSH/SCP [ 4% (2 votes) ]
RSYNC [ 9% (4 votes) ]
SVN (via ssh or export) [ 49% (22 votes) ]
Capistrano [ 7% (3 votes) ]
Ant [ 2% (1 vote) ]
Cut & Paste [ 2% (1 vote) ]
Other Manual Process [ 2% (1 vote) ]
Other Automatic Process [ 2% (1 vote) ]
A mashup of several [ 2% (1 vote) ]

As you can see, rsync – our current choice – only came in third, behind FTP/SFTP and the overwhelming favorite, Subversion deployment. Some people make the copy of the code on the production server a checkout of the current version (an “svn up” when it needs refreshing) and some export from their current version and push those files out. Unfortunately, we’re still stuck with CVS for the time being (a switch is coming soon, hopefully!); once that happens, working with branching, tagging and all the rest should be quite a bit easier.

This final step can either be a manual one (how we do it now) or as a part of another process (how I want to do it). I currently have a final step in my Phing build file that, if all goes well during the build, will call one of our bash scripts to push the code directly from there. This helps to keep things nice and tidy and makes it a simple “single process” tool that sysadmins or QA folks can use to ship out the code that’s been cleared for public consumption.

Now that I’ve gotten to the end of the descriptions on all of this stuff, let’s head back to the beginning and look at some of the actual code, configurations and technologies related to deployment. I’ve seen some good suggestions as to a step I’ve been missing – optimization. Keep an eye out for this first in the “Deployment Tech” series, a continuation of this general look at deployment of PHP applications.

Working towards a better deployment (Part 3)

So, on to the third part of my little series (part one, part two) on making a shiny new deployment system for our sites. What’s that? Yep, I said sites – plural. I’m writing all of this up around a “one site” concept, but in reality we have five or six websites that this process will eventually apply to. They’re all pretty similar, thankfully, and all use a lot of the same libraries. This makes the next step of my little process so much easier – the unit testing.

If you’re not unit testing your code, drop what you’re doing (well, finish this article at least) and go learn as much as you can about it. It can help take that feeling of dread away from your code updates and can make things go a bit smoother in your deployment. Imagine having an automated way to check and ensure that the results of your scripts/libraries are still working exactly how they should. Unit tests can help you verify that things that should pass the tests still do and that your failures still fail in all the right ways.

One of the more popular unit testing tools out there is PHPUnit (from Sebastian Bergmann). It’s a super-easy way to set up a suite of tests that can be run as a part of your deployment process. A test case is just a class with test methods, usually corresponding to the methods in the class under test. You use assertions inside the tests to check for things like pass, fail, true, false, whether an array value is set, etc. The methods are called and the output is verified – if everything’s good, the test passes. If there’s a problem (something non-standard) the test fails, which can cause the entire build process to fail too.
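
Here’s a minimal example of what a test case looks like – the Cart class and its methods are made up for illustration, but the structure is standard PHPUnit:

[php]
require_once 'PHPUnit/Framework.php';
require_once 'Cart.php'; // Cart is a hypothetical class used for this example

class CartTest extends PHPUnit_Framework_TestCase
{
    public function testAddItemIncreasesCount()
    {
        $cart = new Cart();
        $cart->addItem('widget', 2);

        // The count should reflect the two widgets we just added
        $this->assertEquals(2, $cart->getItemCount());
    }

    public function testTotalIsZeroForEmptyCart()
    {
        $cart = new Cart();

        // An empty cart shouldn't have anything to charge for
        $this->assertEquals(0, $cart->getTotal());
        $this->assertTrue($cart->isEmpty());
    }
}
[/php]

Run it through the phpunit command-line tool and you get a pass/fail summary for each test method.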

You can test more than just simple single method calls too. Say you have a class that creates a user session by logging them in and setting things up for them. If your tests require this, you can create the user session in your “setUp” method and use the “tearDown” method once things are finished to log the user out and destroy any resources you might have created.
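
In PHPUnit that pattern looks something like this (again, the UserSession class here is hypothetical):

[php]
require_once 'PHPUnit/Framework.php';
require_once 'UserSession.php'; // Hypothetical class used for this example

class UserSessionTest extends PHPUnit_Framework_TestCase
{
    protected $session;

    protected function setUp()
    {
        // Log a test user in before every test method runs
        $this->session = new UserSession();
        $this->session->login('testuser', 'testpass');
    }

    protected function tearDown()
    {
        // Log out and clean up after each test so the tests stay independent
        $this->session->logout();
        $this->session = null;
    }

    public function testLoggedInUserHasSession()
    {
        $this->assertTrue($this->session->isLoggedIn());
    }
}
[/php]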

These unit tests have found a home in my build process right after the lint check on each of the new files. Phing allows you to define a tests directory for it to look in for any unit tests it should apply. The PHPUnitTask takes care of the dirty work, with options to fail the build if the tests have an error and to gather code coverage information as they’re run. You can also set up a PHPUnitReport task to get more information out of your build. This makes reports and, with the help of some XSLT templates, transforms the results into something a bit more human readable.

All that’s really left after this (with the exception of a few other options) is the push of the up-to-date, tested code out to the production server…but that’s for the next article to cover.

Oh, and for those that are wondering, I’m going to be getting to the tech behind all of this pretty soon. Having information about the setup is one thing, but having the actual setup behind it all helps even more. I’m going to go back over all of this from start to finish to show you how it all fits together. Just hang in there with me!

Working towards a better deployment (Part 2)

In the previous part of this series, I looked at some of the current methods we use for code development and deployment as well as some of the goals I put out there for myself in the quest to find a good deployment system that would work for us and our code. I’m back again to look at the next part of the process – the build.

Once you have a process defined for the developers to follow, you need to lay the foundation for the administrator to handle the rest. When we know the code is good and all is right with the world, the deployment phase can get started. Ideally, the admin shouldn’t have to do much at all – well, besides making sure the right code gets merged into the trunk; that part can be a little tricky. A correctly defined process should handle the rest, though – running tests on the code to ensure correct functionality, locating any syntax errors it might have, etc. – all automagically.

The tool I’ve chosen to act as a base for all of this is Phing, a build tool based on Apache’s Ant that uses XML config files and PHP classes for its “tasks” (both interfaces to external programs and internal data handling methods). This is a perfect setup for PHP deployment and makes it simple for just about any admin out there that’s even vaguely familiar with PHP to extend. Our Phing setup has four goals:

  • Grab the latest version of the repository after it’s been through QA
  • Run a lint check on it to ensure there’s no syntax errors
  • Run a series of unit tests on the code
  • And, if there were no errors, deploy the code out to production

Each of these steps has the “failonerror” value set to true, so the build will fail if even one check is wrong (and, come to find out, unit tests marked incomplete will cause failures too).

Despite what the current Phing documentation says, there is a task for CVS integration that lets you run commands and define parameters. Our first step uses this to check out a clean version of the code to a working directory. Since the build is run as the dedicated web user on the machine, the permissions should be correct, but I used two exec tasks to ensure the owner and group settings were right. I also had to do a little hackery here to keep things a little lighter. My next target (action) in the build file lints the files to be sure they don’t have syntax errors. Unfortunately, by default, this would check everything in our repository. The files for our external site don’t number too many, but our internal site, with several applications and documents inside, is quite a bit larger. This had the linting process taking quite a while and stretching the build out to as much as five minutes.

The hack comes in on the CvsTask file that’s included with Phing – I modified the code to find only the files that were updated in the latest “cvs update” call and return that list back to the build to loop through with a foreach. This saves a ton of time since it only checks the files that have actually changed – we’re just looking for syntax errors, after all.
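
For anyone curious what a Phing task hack like that looks like, here’s a rough, simplified sketch – it’s not the actual CvsTask modification, just an illustration of the pattern of running a command and handing a file list back to the build as a property:

[php]
require_once 'phing/Task.php';

// Illustrative custom task, not the real modified CvsTask
class CvsChangedFilesTask extends Task
{
    private $propertyName = 'cvs.changed.files';

    // Lets the build file choose which property receives the list
    public function setPropertyName($name)
    {
        $this->propertyName = $name;
    }

    public function main()
    {
        // Run the update and capture its output
        exec('cvs -q update -d 2>&1', $output);

        $changed = array();
        foreach ($output as $line) {
            // Lines starting with "U" or "P" are files updated from the repository
            if (preg_match('/^[UP] (.+)$/', $line, $matches)) {
                $changed[] = $matches[1];
            }
        }

        // Expose the comma-separated list so the build can loop over it
        $this->project->setProperty($this->propertyName, implode(',', $changed));
        $this->log('Found ' . count($changed) . ' changed files');
    }
}
[/php]

In the build file, you’d register something like this with a taskdef and then hand the resulting property to a foreach task to run the lint check on each changed file.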

Next up is unit testing…more to come!