The Life Of A Java Developer
Tuesday, April 3, 2012
What today's software developers need to know
Today's software developers don't have to worry about many things that their predecessors used to, like coding to minimize RAM consumption even if it means significantly longer execution time, or WAN connections maxing out at 14.4 kilobits per second.
(Although there may be some out-of-fashion skills they could still benefit from, or that may yet regain relevance.)
However, the reverse is also true: there are many new skills and areas of expertise that today's software developers, hardware developers, system and network administrators, and other IT professionals need that simply didn't exist in the past. (Where "the past" could be anything from "more than three months ago" to five, ten, twenty or more years.)
Knowing what you need to know matters, whether you're just starting out as a software developer (or planning to become one), or are a "seasoned" professional who wants to keep your chops fresh so you can stay in, re-enter, or advance.
So here is what software developers should add to their existing knowledge portfolio.
"Programmers don't learn that someone else is going to take care of the code they write," criticizes Sarah Baker, Director of Operations at an Internet media company. "They don't learn about release management, risk assessment of deploy of their code in a infrastructure, or failure analysis of their code in the production environment -- everything that happens after they write the code. They don't learn that a log is a communication to a operations person, and it should help an operations person determine what to do when they read that log."
However, the reverse is also true: there are many new skills and areas of expertise that today's software developers, hardware developers, system and network administrators, and other IT professionals need that simply didn't exist in the past. (Where "the past" could be anything from "more than three months ago" to five, ten, twenty or more years.)
Knowing what you need to know matters, whether you're just starting out as a software developer (or planning to become one), or are a "seasoned" professional who wants to keep your chops fresh so you can stay in, re-enter, or advance.
So here are what software developers that should add to their existing knowledge portfolio.
Using libraries
"One thing that strikes me as a new skill is the need to work with massive pre-packaged class libraries and template libraries in all the new languages, like Java or C++ or Python," says consultant and software developer Jeff Kenton. "It used to be that once you knew the language and a small set of system calls and string or math library calls, you were set to program. Now you can write complex applications by stringing library calls together and a little bit of glue to hold them all together. If you only know the language, you're not ready to produce anything."Asynchronous programming and other techniques
"Because of the move to cloud computing mostly through web-based interfaces, we are seeing an emphasis on asynchronous programming," says Itai Danan, founder of Cybernium a software development and web design consulting company. "Ten years ago, this was mostly used by transactional systems such as banks, hotels and airline reservations. Today, all but the simplest applications require asynchronous programming, mostly because of AJAX. This is a very different style of programming -- most things taught about software optimizations do not apply across the network boundary."A breadth of skills
"It's become more important to have a breadth of skills" says Ben Curren, CoFounder, Outright.com, which offers easy-to-use online accounting and bookkeeping software for small businesses. "For example, web developers these days need to understand customers, usability, HTML, CSS, Javascript, APIs, server-side frame works, and testing/QA.""Programmers don't learn that someone else is going to take care of the code they write," criticizes Sarah Baker, Director of Operations at an Internet media company. "They don't learn about release management, risk assessment of deploy of their code in a infrastructure, or failure analysis of their code in the production environment -- everything that happens after they write the code. They don't learn that a log is a communication to a operations person, and it should help an operations person determine what to do when they read that log."
Agile and collaborative development methods
"Today's developers need to have awareness of more agile software development processes," says Jeff Langr, owner, Langr Software Solutions, a software consultancy and training firm. "Many modern teams have learned to incrementally build and deliver high-quality software in a highly collaborative fashion, to continually changing business needs. This ability to adapt and deliver frequently can result in significant competitive advantage in the marketplace.Developing for deployability, scalability, manageability
"Sysadmins are likely to own the software for much longer than the developers -- what are you doing to make their stewardship pleasant enough that they look forward to your next deployment?" asks Luke Kanies, Founder and CEO of Puppet Labs: "This includes deployability and manageability. New technologies are often much more difficult to deploy on existing infrastructure because developers haven't had time to solve problems like packaging, running on your slightly older production OS, or connecting to the various services you have to use in production."Friday, March 23, 2012
How to shut down an ExecutorService
The executor framework introduced in Java 5 makes it dead easy to create components running in a background thread. Just create an executor, give it a java.util.Runnable, and that's it. But how do you do a proper shutdown of an ExecutorService?
pExecutorService.shutdown();
try {
    if (!pExecutorService.awaitTermination(pTimeout, TimeUnit.SECONDS)) {
        pExecutorService.shutdownNow();
    }
} catch (final InterruptedException pCaught) {
    pExecutorService.shutdownNow();
    Thread.currentThread().interrupt();
}

First we invoke the shutdown method on the executor service. After this point, no new runnables will be accepted.
The next step is to wait for already running tasks to complete. In this example we allow the running tasks pTimeout seconds to complete. If they don't finish within that time, we invoke shutdownNow(), which calls interrupt on all threads that are still running.
As good practice we also make sure to catch InterruptedException and shut everything down immediately.
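The snippet above can be wrapped into a small self-contained program. This is a sketch, not the post's original code: the helper name shutdownAndAwaitTermination, the five-second timeout, and the demo task are my own illustrative choices.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class Main {

    // Orderly shutdown first; force a shutdown if tasks exceed the timeout.
    static void shutdownAndAwaitTermination(ExecutorService pool, long timeoutSeconds) {
        pool.shutdown(); // stop accepting new tasks, let queued ones finish
        try {
            if (!pool.awaitTermination(timeoutSeconds, TimeUnit.SECONDS)) {
                pool.shutdownNow(); // interrupt tasks that are still running
            }
        } catch (InterruptedException caught) {
            pool.shutdownNow();
            Thread.currentThread().interrupt(); // preserve the interrupt status
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.execute(() -> System.out.println("task ran"));
        shutdownAndAwaitTermination(pool, 5);
        System.out.println("terminated: " + pool.isTerminated());
    }
}
```

Because the quick task finishes well inside the timeout, awaitTermination returns true and shutdownNow() is never needed.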
Monday, March 12, 2012
What should you cache?
A good way of solving performance problems in an application is often to add caching at strategic layers of the application. But what should you cache?
For me, the single most important thing to cache is everything that makes a network request.
Performing a network request will always have an overhead caused by the TCP/IP protocol, network latency, the network cards and the Ethernet cables. Even the slightest network hiccup can cause huge performance issues in your application, and a slow database will seriously decrease its performance.
It is often not possible to cache everything that makes a network request, but not doing so should at least be a conscious decision and not just something you forgot to implement.
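A minimal sketch of this idea: memoize a lookup that would normally cross the network. Everything here is hypothetical for illustration -- fetchFromNetwork stands in for a real remote call, and the call counter exists only to show the cache working.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class Main {

    private static final Map<String, String> CACHE = new ConcurrentHashMap<>();
    private static int networkCalls = 0;

    // Hypothetical stand-in for an expensive call across the network.
    static String fetchFromNetwork(String key) {
        networkCalls++;
        return "value-for-" + key;
    }

    // computeIfAbsent only hits the "network" on a cache miss.
    static String lookup(String key) {
        return CACHE.computeIfAbsent(key, Main::fetchFromNetwork);
    }

    public static void main(String[] args) {
        lookup("user/42");
        lookup("user/42"); // second call is served from the cache
        System.out.println("network calls: " + networkCalls);
    }
}
```

In a real system you would also want an eviction and expiry policy, since stale cached data is a conscious trade-off of this approach.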
Friday, March 9, 2012
Public methods and package private classes
Given these two classes
abstract class AbstractClass {
    public void doSomething() {
        System.out.println("Hello world");
    }
}

public class ConcreteClass extends AbstractClass {
}
both in package org.mydomain. What will happen if I create a new instance of ConcreteClass in another package and try to invoke doSomething()? Will that work?
The answer is: it depends on the JDK.
The Sun JDK allows you to invoke a public method of a package-private class reflectively. OpenJDK, however, will throw
java.lang.IllegalAccessException: Class MyClass can not access a member of class ConcreteClass with modifiers "public"
I'm not sure what the JDK specification says about this, but the moral is:
Do not have public methods in package-private classes.
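The reason the exception above can occur at all is that the reflective Method resolves to the package-private superclass, not the public subclass. A small sketch (my own demo, with all classes deliberately placed in the same package so the call succeeds here; a caller in a different package may hit the IllegalAccessException instead):

```java
import java.lang.reflect.Method;

// Package-private class declaring the public method.
abstract class AbstractClass {
    public void doSomething() {
        System.out.println("Hello world");
    }
}

// Public subclass that merely inherits doSomething().
class ConcreteClass extends AbstractClass { }

public class Main {
    public static void main(String[] args) throws Exception {
        Method m = ConcreteClass.class.getMethod("doSomething");
        // The method's declaring class is the package-private AbstractClass,
        // which is what reflective access checks are applied against.
        System.out.println(m.getDeclaringClass().getSimpleName());
        // Same package here, so this invocation works.
        m.invoke(new ConcreteClass());
    }
}
```

One common workaround is for the public subclass to override the method (even with a one-line super call), so the declaring class seen by reflection is itself public.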
Saturday, March 3, 2012
Where does Node.js stand?
I recently became aware of Node.js and I'm trying to sort out where Node.js fits in the server-side development picture. I found a few introductory videos from Ryan Dahl which sort of gave me the impression that Node might be the way of the future. So naturally the first thing I did from there was to Google "Node.js sucks". And of course, like anything that anyone thinks is good, somebody has to explain why that first guy was totally wrong.

Whenever I hear the type of argument where one side says "X is the best possible," while the other side says "X is the worst possible," I always assume that X is very specialized -- it's very good at doing something that people who like it need to do, but others don't. What I'm trying to put my finger on is just what exactly Node.js specializes in.
So as I understand it, Node.js has a few things that make it a lot different from traditional server-side development platforms. First off, Node code is basically JavaScript. Client code running on the servers? That's weird. Also, I shouldn't have said servers (plural) because Node.js requires a dedicated HTTP server -- just one server, and it's got to be HTTP. This is also weird. Node's got some clear advantages though. It's asynchronous and event-based, so theoretically Node applications should never block on I/O. Non-blocking I/O might make Node.js a powerful new tool for dealing with giant message queues, but maybe it's got more working against it than just being weird.
I think the guys that say Node.js sucks sound kind of crazy, but they do have a point or three. First and foremost is that Node.js is single threaded; then the detractors have a problem with the similarities Node.js shares with JavaScript; and finally, they say that Node.js cannot possibly back their claim of being blockless.
Addressing the concern about JavaScript is tough for me. I'm not an expert with JavaScript and I don't really know its advantages and disadvantages over other languages. I have read detractors state that JavaScript is a slow language because it is scripted and not compiled. I have read JavaScript proponents explain that it's not the language that is either slower or faster, but the way the code is written, meaning that the skill of the coder supersedes the inherent qualities of the language. Both arguments have merit, and I don't feel qualified to pick a winner.
Most server-side developers are very used to running basically linear processes concurrently in separate threads. This method allows you to run multiple complicated processes at the same time, and if one process fails, the other threads can still remain intact. So having a single thread run one process at a time sounds like it would be really slow. I don't think this is the case with Node.js because it is asynchronous and event based, which is a very different model than one might be used to.
Instead of running one process, waiting for the client to respond and then starting another process, Node.js runs the processes it has the data to run as soon as possible in the order it receives them. Then when the response comes back that's a new process in the queue, and the application just keeps juggling these requests. The overall design is such that Node developers are forced to keep each process very short because – as the detractors are quick to point out – if any one process takes too long it will block the server's CPU which will in effect block the application.
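Since this is a Java blog, here is a rough single-threaded analogue of that queue model in Java rather than JavaScript: one thread draining a FIFO queue of tasks. The task names and the 200 ms delay are invented for illustration; the point is that one slow handler delays everything queued behind it.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class Main {
    public static void main(String[] args) throws InterruptedException {
        // One thread, one queue: a crude stand-in for an event loop.
        ExecutorService loop = Executors.newSingleThreadExecutor();

        loop.execute(() -> System.out.println("short task 1"));
        loop.execute(() -> {
            // A long-running handler monopolizes the single thread...
            try { Thread.sleep(200); } catch (InterruptedException e) { }
            System.out.println("slow task");
        });
        // ...so this cheap task must wait its turn behind the slow one.
        loop.execute(() -> System.out.println("short task 2"));

        loop.shutdown();
        loop.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The tasks always complete in submission order, which is exactly why keeping each handler short matters in this model.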
So you can't do long complicated processes like calculating pi with Node.js. Apparently they have workarounds for spinning off really complicated processes if you really need to, but that seems to be outside of the scope of the original plan. I think that where Node shines is in routing a high volume of low-overhead requests. Which means to me that Node.js is great for light messaging applications with a high user volume.
Are there other uses I've missed? Are there other issues with asynchronous programming in a single thread? Is there some part of the big picture I'm not seeing? Am I just plain wrong about all of this? Leave me a comment and let me know what's what.
Friday, March 2, 2012
The Last Responsible Moment
In Lean Software Development: An Agile Toolkit, Mary and Tom Poppendieck describe a counter-intuitive technique for making better decisions:
Concurrent software development means starting development when only partial requirements are known and developing in short iterations that provide the feedback that causes the system to emerge. Concurrent development makes it possible to delay commitment until the last responsible moment, that is, the moment at which failing to make a decision eliminates an important alternative. If commitments are delayed beyond the last responsible moment, then decisions are made by default, which is generally not a good approach to making decisions.
Paradoxically, it's possible to make better decisions by not deciding. I'm a world class procrastinator, so what's to stop me from reading this as carte blanche? Why do today what I can put off until tomorrow?
Making decisions at the Last Responsible Moment isn't procrastination; it's inspired laziness. It's a solid, fundamental risk avoidance strategy. Decisions made too early in a project are hugely risky. Early decisions often result in work that has to be thrown away. Even worse, those early decisions can have crippling and unavoidable consequences for the entire future of the project.
Early in a project, you should make as few binding decisions as you can get away with. This doesn't mean you stop working, of course -- you adapt to the highly variable nature of software development. Often, having the guts to say "I don't know" is your best decision, immediately followed by "...but we're working on it."
Jeremy Miller participated in a TDD panel with Mary Poppendieck last year, and he logically connects the dots between the Last Responsible Moment and YAGNI:
The key is to make decisions as late as you can responsibly wait because that is the point at which you have the most information on which to base the decision. In software design it means you forgo creating generalized solutions or class structures until you know that they're justified or necessary.

I think there's a natural human tendency to build or buy things in anticipation of future needs, however unlikely. Isn't that the Boy Scout motto -- Be Prepared?
I think we should resist our natural tendency to prepare too far in advance. My workshop is chock full of unused tools I thought I might need. Why do I have this air compressor? When was the last time I used my wet/dry vac? Have I ever used that metric socket set? It's a complete waste of money and garage space. Plus all the time I spent agonizing over the selection of these tools I don't use. I've adopted the Last Responsible Moment approach for my workshop. I force myself to only buy tools that I've used before, or tools that I have a very specific need for on a project I'm about to start.
Be prepared. But for tomorrow, not next year. Deciding too late is dangerous, but deciding too early in the rapidly changing world of software development is arguably even more dangerous. Let the principle of Last Responsible Moment be your guide.