
Interview with Jeffrey Thalhammer

Jeffrey Thalhammer is the creator of Perl::Critic and Pinto. He runs Stratopan, a paid service (currently in beta) that helps you manage stable, private CPAN-like repositories. The Perl Review interviewed him in November 2013.

Perl

The Perl Review: What was the first thing you did with Perl?

Jeffrey Thalhammer: I studied economics in school, so my first job after university was at the Federal Reserve Bank. I wrangled data for economic forecasting models so Alan Greenspan could decide how to set interest rates. I had to crunch some numbers, like unemployment data for every county in Texas, and a colleague showed me how to parse the CSV file with Perl. At that point, I was hooked.
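
For readers curious what that first taste might have looked like, here is a minimal sketch of parsing a CSV file with the Text::CSV module from CPAN. The file name and column layout are invented for illustration; this is not the code Jeff wrote at the Fed.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Text::CSV;

    # Hypothetical input: one row per county, with an unemployment-rate column
    my $csv = Text::CSV->new( { binary => 1, auto_diag => 1 } );
    open my $fh, '<', 'tx_unemployment.csv' or die "Can't open file: $!";

    $csv->getline($fh);    # discard the header row
    while ( my $row = $csv->getline($fh) ) {
        my ( $county, $rate ) = @{$row}[ 0, 1 ];
        print "$county: $rate%\n";
    }
    close $fh;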

TPR: Did you really work for Alan Greenspan?

Jeff: No, not directly. My forecasts went to the San Francisco bank president, who sat on the Board of Governors. He and Alan and the other Board members used forecasts from all the Federal Reserve Banks to set monetary policy. It was very exciting. This was during the late 1990s, and the US economy was changing rapidly back then.

TPR: Which version of Perl were you using at the time?

Jeff: It was 5.005_03. I remember because some of the newer machines had 5.005_04 instead, and that created headaches for us. It was probably because the machines had different Perl modules installed on them too. In those days, we didn't really know how to manage our Perl. Fortunately, we have much better tools now.

TPR: How did you learn Perl?

Jeff: I never formally studied computer science, so I mostly learned Perl through osmosis with my co-workers and by reading books. And there has also been a lot of trial-and-error. I have learned a lot by fixing (and re-fixing) the coding sins of my past, but that usually isn't the most efficient way to learn.

TPR: If you could go back and learn it again, what would you do differently?

Jeff: I would spend a lot more time reading the source code from others. Software developers don't spend nearly enough time reading code. In any other profession, mastery involves careful study of prior art. But software developers are constantly driven to produce rather than think. You have to strike a balance.

But there is a lot of code out there, and not all of it is worth reading. Especially when you are a novice, it is very frustrating to discover you've been following a bad example. Learning from mistakes is good, but avoiding them in the first place is better. I wish there were a directory of "great works of code" organized by topic and language. I think that would've helped me.

TPR: So what code have you read lately?

Jeff: I've been reading the source for the Mojolicious framework. Web applications are a bit mysterious to me, but the source code for Mojolicious is beautiful. It is very consistent and well organized. However, it is not an easy read for beginners. It uses rather dense expressions which are hard to understand if you're not fully fluent in Perl. But it is something to aspire to.

TPR: What advice do you give to new Perlers?

Jeff: Be patient. Perl is easy to learn, but hard to master. And remember that less code is always better than more. This means learning to write reusable code, or leveraging open source libraries, or just solving the problem without any code at all. A wise programmer knows when to write code and (more importantly) when not to.

TPR: Or experienced ones?

Jeff: Listen carefully to new Perlers. Try to understand the pain points they experience with your code, the language, and the community. Then use your expertise to build bridges rather than fences. For me, teaching Perl classes really opened my eyes, and it made me a better programmer.

TPR: Which version of Perl do you use for most of your work? Is there a difference between what you can use at work and at home?

Jeff: I'm using 5.16.3 at work right now. Other than maintaining my open source projects (which run on many versions of Perl), I don't use Perl at home. I don't do any "leisure programming" at all, really. Outside of work, I am remarkably low-tech. My wife doesn't understand why I won't bother to figure out how to stream Netflix to the television.

Perl::Critic

TPR: You're perhaps best known as the Perler who invented Perl::Critic. How soon after reading Damian Conway's Perl Best Practices did you start working on that?

Jeff: The idea struck me almost immediately. I was on a team that maintained a thorny legacy system, and we desperately needed to learn how to code better. But there just wasn't time to educate everyone because we were all too busy trying to keep the ship afloat. So the notion of applying Conway's guidelines programmatically seemed obvious at the time.

Of course, I thought it was the most brilliant idea of all time. I had no idea that every other language already had some kind of static analysis tool. I didn't even know the term "static analysis" until I was well into the project. I'm a bit surprised the architecture for Perl::Critic turned out so similar to other static analyzers. I must have done something right.

TPR: Perl::Critic is based on Adam Kennedy's PPI, a "good enough" static Perl parser. Were you already playing with that when you started Perl::Critic?

Jeff: It was pure luck that I found PPI on CPAN right around the time I finished reading Conway's book. Up until that point, I had never spent much time browsing CPAN. Now I probably spend 30 minutes every day on CPAN, looking for code that will make the other 7 1/2 hours of the day more productive. One of my favorite quotes from Matt Trout: "Perl is my virtual machine, but CPAN is my language."

But yes, PPI is what made Perl::Critic possible, so I owe a lot to Adam Kennedy. Without PPI, I would have had to invent a parser and lexer for Perl (which I'm not smart enough to do) or write a lot of regular expressions (which would be impossible to maintain). PPI has also been the key to many recent projects, including Padre, an IDE for Perl, and Dist::Zilla, a very powerful tool for authoring Perl modules.
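
To give a flavor of what PPI provides, here is a small sketch (not taken from Perl::Critic itself, and with a placeholder file name) that parses a source file into PPI's document tree and lists its named subroutines:

    use strict;
    use warnings;
    use PPI;

    # Parse a Perl source file into PPI's document object model
    my $doc = PPI::Document->new('MyModule.pm')
        or die 'Could not parse file: ' . PPI::Document->errstr;

    # Find every subroutine declaration and print its name
    my $subs = $doc->find('PPI::Statement::Sub') || [];
    print $_->name, "\n" for grep { $_->name } @{$subs};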

TPR: Do you still use your own tool?

Jeff: Absolutely, but I use a much less restrictive configuration now. I trust myself and my colleagues more, so we give ourselves some latitude. But that doesn't mean every team should do the same. Code should be written for humans first, and computers second. So you have to remember your audience. If the people maintaining the code have a certain background or a certain skill level, then you should configure Perl::Critic to cater to their perspective.
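
A relaxed configuration like the one he describes might look something like this .perlcriticrc sketch. The specific choices are illustrative, not Jeff's actual settings.

    # .perlcriticrc -- a deliberately permissive configuration (illustrative only)

    # Report only the more severe problems; 5 (gentle) reports the least, 1 (brutal) reports everything
    severity = 4

    # Silence a policy the team has agreed to ignore
    [-RegularExpressions::RequireExtendedFormatting]

    # Keep a complexity policy, but loosen its threshold
    [Subroutines::ProhibitExcessComplexity]
    max_mccabe = 15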

TPR: What Perl::Critic policy personally annoys you the most?

Jeff: Perl::Critic is always barking at me to add the /xms modifiers to regular expressions to make them more readable and behave the way most people expect. But I feel those modifiers just aren't always appropriate. They can also dramatically change the meaning of an expression, so you can't just add them willy-nilly.
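
A quick illustration of why those modifiers can't be added blindly: under /x, literal whitespace in a pattern stops matching anything, so a working expression can silently change meaning (and /s and /m likewise change what . , ^ , and $ match). This tiny example is ours, not from the interview:

    my $text = 'San Francisco';

    print "match\n" if $text =~ m/San Francisco/;      # matches
    print "match\n" if $text =~ m/San Francisco/x;     # no match: /x ignores the literal space
    print "match\n" if $text =~ m/San[ ]Francisco/x;   # matches again; the space is now explicit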

On the other hand, my favorite policies are the ones that limit complexity. By far, these have been the most helpful. If your code is organized into small, simple bits with good names then you can overcome any maintenance problem that comes along. You can disable all the other policies if you truly disagree with them, but these ones alone make Perl::Critic worthwhile.
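
To get a feel for just those complexity checks without adopting a whole configuration, the perlcritic command can be limited to one policy at a time; something along these lines (the file path is a placeholder):

    perlcritic --single-policy Subroutines::ProhibitExcessComplexity lib/MyModule.pm
    perlcritic --single-policy ControlStructures::ProhibitDeepNests lib/MyModule.pm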

Perl mongers

TPR: You live in San Francisco. What's the Perl culture in the Bay area?

Jeff: San Francisco is very fashionable, so programming languages are partly driven by popular trends here. Within the city, Perl doesn't get much attention. But the leadership of the SF Perl Mongers is working hard to change that. There are going to be more frequent meetings and more collaboration with the rest of the tech community. I'm very excited about it.

TPR: What's your feel for Perl's popularity in Silicon Valley?

Jeff: Perl has more traction down in the Valley. Yahoo uses a lot of Perl. I wrote some for Apple (yes, they use Perl too). And there is Ariba, WhiteHat, and countless others. There are so many tech companies in the Valley, and Perl is used in one way or another at just about all of them.

TPR: You recently went on a "road show" to several Perl Mongers groups. How did that go?

Jeff: That was a lot of fun. Most recently, I met with the Los Angeles Perl Mongers to talk about Pinto. And while I was there, I gave the same talk to several local Perl shops. If you prepare a good presentation and make an effort to reach out to people, they usually want to hear what you have to say. I love that about open source.

TPR: Which other groups would you like to visit?

Jeff: I'll take any excuse to travel. I'd like to visit the Perl Mongers in Portland and New York and London—they are all very active. And I would love to reach out to the smaller or newer groups in other countries—Brazil, India, China. I think these up-and-coming markets are an important part of Perl's future.

Playing the field

TPR: What other languages have you tried since learning Perl?

Jeff: I've done work in Java, JavaScript, and Objective-C. I really like Objective-C, and the frameworks that Apple provides are excellent, once you learn your way around. JavaScript is a lot like Perl—it is a very "loose" language. But that is starting to change as frameworks like Angular and Backbone become more popular. The same trend is happening in Perl too. Moose and perl5i are making the language more formal and giving us a higher level of abstraction.
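
As a tiny illustration of the extra formality Moose brings (this example is ours, not from the interview), attributes get declared types and generated accessors instead of hand-rolled hash slots:

    package Point;
    use Moose;

    # Typed, read-only attributes with auto-generated accessors
    has 'x' => ( is => 'ro', isa => 'Num', required => 1 );
    has 'y' => ( is => 'ro', isa => 'Num', required => 1 );

    sub distance_from_origin {
        my $self = shift;
        return sqrt( $self->x**2 + $self->y**2 );
    }

    1;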

TPR: What keeps you coming back to Perl?

Jeff: I like Perl because it doesn't get in my way. Like all languages, it has some warts. But Perl helps me get a lot done with very little effort. And I think that is what brings me the most joy in this profession—much more so than creating an efficient algorithm or designing an elegant interface. But I'm not wed to Perl either. It is important to choose the right tool for the job, and these days, there are plenty to choose from.

CPAN

TPR: What was your first CPAN contribution?

Jeff: Perl::Critic was actually my first contribution. I had never even submitted a patch or filed a bug report before. I do that all the time now, but it all started with Perl::Critic.

TPR: I think you invented the idea of a private CPAN. What was the story behind that?

Jeff: I didn't invent the idea. There have always been CPAN mirrors. Facebook has one. Most universities have one. I think Playboy once had one. There are thousands of them. You can think of all those as private CPANs, and each of them is a very large pile of code.

But Randal Schwartz was the first to talk about mirroring just the "tip" of CPAN, that is, only the latest versions of everything. That's a much smaller and more manageable pile of code. Soon after that, modules like CPAN::Mini and CPAN::Site were created to help you create and update this little private copy of CPAN.
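
For reference, CPAN::Mini's bundled minicpan command builds and refreshes such a "tip only" mirror in a single invocation; roughly like this, with illustrative paths and a placeholder module name:

    # Mirror only the latest release of everything into a local directory
    minicpan -l ~/minicpan -r http://www.cpan.org

    # Later, install strictly from that local mirror
    cpanm --mirror file://$HOME/minicpan --mirror-only Some::Module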

My big contribution has been the idea of using the CPAN structure, but only filling it with the modules you actually use or produce. That's an even smaller pile of code. And when the pile is small enough, you can start to do some really interesting things with it.

TPR: What's the practical problem you saw with tracking the live CPAN?

Jeff: The problem with the live CPAN is that it changes all the time. Hundreds of new or updated modules enter CPAN every day, and the toolchain is designed to always install the latest available version of a module. This means you might never build exactly the same application from one day to the next. That's a very big problem. If you're delivering a production application, you want the code to be exactly the same regardless of whether you built it on a Monday or a Tuesday.

And the problem only gets worse over time. Almost all projects will gradually accrete CPAN modules. Unless you keep really good records or stash those modules and all their dependencies somewhere, you quickly lose track of what is actually in your application. So when it comes time to change operating systems or move to a new version of Perl, you have no idea what you actually need to build your application. You become frozen because you can't risk upgrading everything at once. This is the point where many applications will stagnate or just die. But it doesn't have to be that way.

TPR: My own work with MyCPAN indexing was a result of the stuff you had started. I only wanted to index and catalog, but that's only part of the solution. What else needs to happen?

Jeff: Your work on MyCPAN helps solve the problem of knowing your Perl stack. It gives you a way to reverse-engineer the environment and figure out what modules you need. So that will help bail you out of trouble. But you still need a way to manage your modules going forward, so you don't get into trouble again. This is where a private CPAN really shines.

With a private CPAN you can incrementally move your modules forward, but only when you decide to upgrade, not whenever the author ships a new release to CPAN. And since your private CPAN will have all the original distributions, you can easily rebuild your application from scratch on a new OS or a new Perl without having to dig up old releases or worry about patches you might have made.

TPR: You created Pinto as a private CPAN management tool. Tell us about that.

Jeff: Pinto takes the private CPAN concept and puts a convenient interface on it. It has several commands for managing your CPAN modules (and your own homegrown modules too). It also functions as a version control system, so you can look back at what has happened over time and identify which change introduced a problem. When you get into trouble, you can roll back to an earlier state.
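
A rough sketch of that workflow with the pinto command might look like this; the repository path and distribution names are illustrative:

    # Create a repository, then pull a module (and its dependencies) from CPAN
    pinto --root ~/pinto init
    pinto --root ~/pinto pull Plack

    # Add one of your own homegrown distributions
    pinto --root ~/pinto add My-Private-Dist-1.0.tar.gz

    # Review the history of changes to the repository
    pinto --root ~/pinto log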

I think the most powerful feature of Pinto is "stacks". In a typical CPAN, there is only one index. So effectively, there is only one version of any given module at a time. But with Pinto you can have any number of indexes, each called a "stack". This means Pinto can hold many different versions of the same modules, so you can experiment with different sets of modules in isolation. A Pinto repository can support different versions of modules for each stage in the development cycle (dev, test, prod, etc.), or for different operating systems, or for different versions of Perl. There are a lot of possibilities.
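
Working with stacks is roughly along these lines; the stack names and version spec below are just examples, not prescribed usage:

    # Create a new stack as a copy of the current one
    pinto --root ~/pinto copy master dev

    # Try a newer module on the dev stack only
    pinto --root ~/pinto pull --stack dev Moose~2.1

    # Pin a module on the prod stack so it cannot be accidentally upgraded
    pinto --root ~/pinto pin --stack prod Moose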

TPR: What's the feedback been like for that?

Jeff: Fantastic. Pinto has been adopted by several companies and is used to manage some really complex production environments. And as always, the Perl community has been supportive. There is an active group of contributors making Pinto better all the time. An ecosystem of supporting software has started to emerge too. There are some tools for integrating Pinto with version control systems, and for automating deployment with Puppet or Chef. The Perl Foundation also approved a grant to develop some new features for Pinto.

Ironically, I think the one thing that holds Pinto back is its size. It is a full-scale application with lots of dependencies and a real database inside. It's actually the kind of thing that you would build for your own enterprise. But we've made it easy to install with just one command. It all builds into a single self-contained directory so Pinto still works even if you break the rest of your Perl environment.

TPR: Now you're running Stratopan, a hosted version of Pinto. Who do you think can benefit from that?

Jeff: Even though Pinto is easy to use and has great documentation, it is yet another tool in your development process that requires care and feeding. So rather than maintaining your private CPAN locally with Pinto, we can host it for you on Stratopan.com. With Stratopan, there is nothing to install or maintain, and you get a slick web interface. It all works seamlessly with the CPAN toolchain so you can install your modules anywhere.

Stratopan is in beta right now and most of the current users are individual developers. But I think medium to large businesses have the most to gain from Stratopan. Many companies have legacy applications that provide a lot of value but carry rising maintenance costs. A big part of those costs is dealing with CPAN modules. Those businesses face a dilemma: either upgrade modules and risk introducing new bugs, or stay on their current modules and pay the cost of creating or fixing things that are on CPAN for free. Stratopan provides a very convenient way to control those risks and costs.
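
Because a hosted repository speaks the standard CPAN mirror protocol, installing from one is, roughly, a matter of pointing cpanm at its URL. The URL and module name below are made-up placeholders, not a real repository:

    cpanm --mirror https://stratopan.com/example-org/example-repo/master --mirror-only My::Module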

Personal

TPR: Did you always think that you'd be a programmer?

Jeff: No, I never expected it. My father is also a software engineer for the National Laboratory, though I didn't really appreciate his work until much later in life. So programming is probably in my genes anyway.

TPR: How might your life have turned out if you didn't sling code for a living?

Jeff: I would have liked being a carpenter. In college I spent a couple summers doing metal stud framing and hanging drywall. I always felt it was the most honest and productive work that I ever did. And I loved the crew—those guys really knew how to work hard and play hard.

TPR: What's a typical work day for you like?

Jeff: Stratopan.com is my full-time job so I spend most of my day developing for the site, handling support requests, or executing our marketing strategy. And then there is all the administrative work that goes with running a business—dealing with accountants, lawyers, vendors, etc. The rest of the day is spent working on Pinto or Perl::Critic, usually helping new contributors or responding to issues on GitHub. But I always leave some time to go on an adventure with my son, Wesley.

TPR: Tell us about your favorite Perl t-shirt.

Jeff: That would definitely be my YAPC::NA 2006 shirt, from Chicago. It has the three stripe Chicago flag on the front, but instead of stars it has the Perl sigils: $ @ % &. Every time I wear that shirt some Chicagoan will come up and say "hi", so it has led to a lot of interesting conversations. That was also my first YAPC and Chicago is one of my favorite cities.