Credits: Infoq

On completing a computer science degree, a large proportion of graduates proceed into industrial jobs in software development. They work in teams inside organisations large and small to produce software that is central to business and in many ways underpins the daily lives of everyone in the developed world.

Many degree programmes provide students with a solid grounding in the theoretical basis of computing, but it is difficult in a university environment to provide training in the types of software engineering techniques and practices that are used in industrial development projects. We often hear how there is a skills shortage in the software industry, and about the apparent gap between what people are taught in university and the “real world”. In this article we will explain how we at Imperial College London have developed a programme that aims to bridge this gap, providing students with relevant skills for industrial software engineering careers. We will also describe how we have tried to focus the course around the tools, techniques and concerns that feature in the everyday life of a professional developer working in a modern team.

Classes in universities are almost always taught by academic researchers, but few academics have personal experience of developing software in an industrial environment. While many academics, particularly computer scientists, do write software as part of their research work, the way in which these development projects are carried out is normally not representative of the way that projects are run in industrial settings. Researchers predominantly work on fairly small software projects that act as prototypes or proofs-of-concept to demonstrate research ideas. As such they do not face the pressures of developing robust software for a mass market. They may concentrate on adding new features required to further their research, paying less attention to robustness or maintainability. Similarly they do not typically have a large population of users to support, or need to support the operation of a system that runs 24/7, as the developers at an online retailer, financial services organisation or telecoms company might.

Academics and postgraduate researchers often work on their own, and so often do not have experience of planning and managing the work of many different contributors to a software project, integrating all of these whilst preserving an overall architecture which supports maintainability, and making regular releases to a customer according to an agreed schedule. Because of this, few academics have occasion to develop practical experience of the project management and quality assurance methods prevalent in modern industrial software development.

Our approach to tackling these issues has been to engage members of the industrial software engineering community to aid in teaching our software engineering curriculum, drawing on their practical experience to guide the content and the delivery of its constituent courses. This has ranged from getting individual pieces of advice on current issues, helping to outline course content, having practitioners come in to give guest lectures or coaching, to – in my own case – joining the staff. We have found that practitioners are generally very happy to help us shape the curriculum for the next generation of software engineers, to give something back to the community, and of course helping with teaching can also be an opportunity to promote their companies if they are recruiting.

Course Content

At Imperial we have a three or four year programme in Computing. Students can study three years for a BEng degree, or study an extra fourth year and receive an MEng degree. The first three years are fundamentally the same for both programmes, but those going on to take the fourth year also do a six month work placement with a company between their third and fourth years. Here we will describe the core modules that we feel constitute the “software engineering” element of the course – although alongside these, students study modules in mathematics, logic, compilers, operating systems and many other aspects of what might be thought of as “computer science”.

First Year

In the first year of the degree programme, we concentrate on basic programming skills. We believe that these are fundamental for all of our students, and these are taught through lecture courses in functional, object-oriented and systems programming, supported by integrated computer-based laboratory exercises. The lab exercises are very important, as it is through these that students get to practise programming, get personalised feedback, and improve.

One problem that we have is that some students come to university with lots of coding experience, sometimes from school, but mostly from self-study and projects undertaken in their own time. Others come never having written a line of code in their lives. We need to support both of these groups in our introductory course – not making the inexperienced coders feel like they are disadvantaged, whilst not boring the more experienced students with material they already know. The main thing we have done to try to level the playing field is to start by teaching Haskell as the first language. This is usually equally unfamiliar to almost all of the new students – even those who have programmed a lot typically have not used this type of language before.

One innovation that has proved very successful is introducing the use of version control right from the very first week. Rather than being “a tool for collaborative projects” used later on, we have made it so that every lab exercise involves cloning a starting point from a git repository, making incremental commits, and then what the students submit for assessment is a git commit hash pointing to the version that they want marked. This makes use of version control something that is completely natural and an everyday activity.
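As a sketch of that weekly routine, the entire submission flow is ordinary git. The repository and file names below are invented for illustration, and in the real course the starting point is cloned from a course server rather than created locally:

```shell
# Hypothetical lab workflow. The real exercise is cloned from a course
# repository; here we initialise a fresh repo so the sketch is self-contained.
cd "$(mktemp -d)"
git init -q lab1 && cd lab1
git config user.name "Student" && git config user.email "student@example.com"

echo 'double x = x * 2' > exercise.hs            # first attempt at the exercise
git add exercise.hs && git commit -q -m "Implement double"

echo 'triple x = x * 3' >> exercise.hs           # incremental improvement
git commit -q -am "Add triple"

# What the student submits for assessment is simply a commit hash:
SUBMISSION=$(git rev-parse HEAD)
echo "$SUBMISSION"
```

Submitting a hash rather than a zip of files means every submission is a reproducible point in the project's history.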

Second Year

In their second year, we aim to teach students how to design and develop larger systems. We want to move on from teaching programming in a particular language, and to look at larger design concerns. In an earlier iteration of this course, the material concentrated on notation, specification languages and catalogues of design patterns. This meant that students would know a range of ways to document and communicate software designs, which were not tied to a particular implementation language. However, when comparing this course content with the design practices predominantly used in industrial projects, we found some mismatches.

Formal specification techniques are used by engineers developing safety-critical and high-precision systems, but these make up only a small proportion of industrial teams. Many more are working on doubtless important – but not safety-critical – systems that support business in different types of enterprise, consumer web services, apps, games etc. The use of formal specification techniques amongst these sorts of teams is relatively rare. Also, as agile development methods are now common, design is no longer considered a separate phase of the project, to be completed before coding commences – rather it is a continuous process of decision making and change as the software evolves over many iterations. There are still design concerns at play, but rather than needing a way to specify a software design abstractly up-front, the common case is that team members need ways to discuss and evaluate design ideas when considering how to make changes and add new features to an existing piece of software.

We still give students the vocabulary of design patterns and architectural styles, but with each we look at the problem it is aiming to solve (for example the removal of duplication) and any trade-offs that may apply (for example introduction of coupling caused by the use of inheritance, and how this might affect future changes). We have moved towards grounding the examples in code, accompanied by tests, and cast design changes as evolutions and refactorings affecting various qualities of the codebase that we are working on. By working concretely with code, we have found that students engage more directly with different design concerns, and the effects of the forces at play in the system, than they did when thinking about designs more abstractly. We can use modern IDEs to manipulate code into different structures, use metrics to talk explicitly about code quality, complexity, coupling etc., and the students can learn kinaesthetically by working through problems and producing practical solutions.
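A minimal sketch of that kind of code-grounded design discussion (the class and names here are invented for the example): duplication is removed by injecting the point of variation as a function, trading an inheritance hierarchy for composition, which keeps the core class decoupled from each format:

```python
from dataclasses import dataclass
from typing import Callable, List

# Before the refactoring, each report format either duplicated the rendering
# loop or subclassed a base Report, coupling subclasses to the parent's
# implementation. After: the varying part is injected, so adding a new
# format needs no new subclass and no change to Report itself.

@dataclass
class Report:
    lines: List[str]
    render_line: Callable[[str], str]  # the point of variation, injected

    def render(self) -> str:
        return "\n".join(self.render_line(line) for line in self.lines)

plain = Report(["alpha", "beta"], render_line=lambda l: l)
html = Report(["alpha", "beta"], render_line=lambda l: f"<li>{l}</li>")

print(plain.render())
print(html.render())
```

The trade-off is exactly the sort of thing discussed in class: composition removes the coupling of inheritance, at the cost of one more moving part to understand.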

Third Year

In their third year, students have a major assignment to work on a project in a group of 5-6, over a period of about 3 months (in parallel with their lecture courses). Each group has a different brief, but all are aiming to build a piece of software that solves a particular problem or provides a certain service for their users. Each group has a customer – either a member of the faculty, or an industrial partner, to guide the product direction. The main aims of this project from an educational point of view are to build the students’ skills in teamwork and collaboration, and to put into practice software engineering techniques that support this kind of development work. To support this, we run in parallel a course on Software Engineering Practice, covering development methods, tools, quality assurance, project and product management techniques, etc.

This has been one of the most difficult courses for us to get right. The main problem is one of relevance. We want the software engineering course to support the group projects, and for the two to be integrated. But, feedback from the students has often been that they felt that the software engineering course was a distraction, and that they would prefer to spend time “just getting on with the project”. This shows that students were not feeling the benefits of the taught software engineering practices in their own projects. We considered two possible reasons for this.

Firstly, the range of projects being carried out by different groups is wide. Some may be developing mobile apps, while others create web applications, desktop software or even command line tools. If we include material in the curriculum about a particular topic that is relevant to some project groups – for example cloud deployment – it may be irrelevant to others. The more content we try to include in the course, the greater the chance that we are asking students to spend time learning a topic that they feel does not affect their project.

The second reason that we think students are not feeling the benefit of taught software engineering practices is that even though these projects are by no means trivial, they are not big or long enough to really feel the pain of maintaining software over a long period of time, integrating many different aspects. We encourage them to set up collaboration tools and procedures to help them work together both in terms of technical code and software management, and also more general project management. At the beginning of the project, these can seem like overhead – especially the time spent setting up tools, which again can seem like time lost from “getting on with the project”. It is only towards the end of the projects, when pressure is on to deliver, that these tools and techniques return rewards on that investment.

Fourth Year

The final part of our four year programme is a course entitled Software Engineering for Industry. The main philosophy behind it has been to give students a view of some of the issues facing industrial software engineers, essentially preparing them for the world of work. As we have iterated on the second and third year courses, we have tried to include more and more industry-relevant content, and this has often meant moving material down from the fourth year course. For example some material on test-driven development that we used to cover in the fourth year is now a core part of the second year, and an introduction to agile methods is now done in the third year to support group projects. While we do not want to be jumping on all the passing trends, this advanced course gives us a vehicle to discuss and distill the current state of practice, and to filter things down into lower years once they become core practices.

One of the main topics that we aim to cover in this course is working effectively with legacy code. A large proportion of practising software engineers spend their working lives making changes to existing codebases, rather than starting from scratch. This is not a bad thing, it is normal. Successful systems evolve and need to be updated as new requirements come in, market conditions change, or other new systems need to be integrated. This is not second class work, but engineers need techniques to work in this way which differ from what they might do if they had free rein to start from a blank slate. When might it be more appropriate to refactor, and when to rewrite?

Such topics are the realm of opinion rather than hard fact. Thus one of our aims in this course is for students to develop their critical thinking, and to voice their own opinions and arguments based on reading around each topic presented. The main part of their week’s work is to research the topic through blogs, articles, papers, videos of conference talks etc., and to write a short position statement answering one of the discussion questions. Then we have a discussion class where students briefly present and discuss their findings from their week’s work. To add to the industrial viewpoint, each week we invite a “panel” of industrial experts as guests. We elicit the panel’s views on the topic under discussion, and they bring their own stories, examples and case studies to share. As we develop the course, it feels less like we are delivering content, and more like we are designing an experience through which the students can participate and learn for themselves.

Perspectives on Teaching

Not all material is taught in the same way, and we are continually trying to improve the learning experience. One way that has helped us to think and talk about this is to consider the different ways that students learn in terms of three perspectives described by Mark Guzdial in his recent book Learner-Centered Design of Computing Education. Guzdial characterises different learning experiences as Transmission – the transferral of knowledge through a one-way medium like a lecture, Apprenticeship – where students focus on developing skills by practising them in exercises, and Developmental – where each student gets individual help with the things that will help them personally to advance, not necessarily aligned with the rest of the class.

In teaching software engineering, we still have quite a lot of transmission (even though there is evidence [http://www.pnas.org/content/111/23/8410] that it is not so effective, tradition is hard to overcome), but we are starting to focus more on apprenticeship models and the deliberate practice of skills, particularly in terms of software development. It is hard to give students frequent one-to-one attention with class sizes of 150 students and relatively few tutors, but as we encourage more group work and particularly pair programming in student assignments, we find that students are able to coach and learn from each other, getting individual developmental help from their peers. Prof Laurie Williams at NCSU has done a lot of work showing the effectiveness of pair programming in teaching. [http://collaboration.csc.ncsu.edu/laurie/pair.html]

Making the Learning Experience More Effective

As we strive to improve the content that we teach, and the way that we teach it, a useful approach has been to think about the delivery of ideas as a value stream. If we start with a big list of requirements for what students should learn (a syllabus) and then over the course of a few months, transmit them via lectures, and at the end perform some quality assurance on this learning by giving the students an exam, then we have something that feels very much like a waterfall development process. In software development, the industry has evolved to value fast feedback and frequent delivery of value in small batches. Can we work towards the same goals in iterating on our learning experiences?

One thing we have done along this path is introducing weekly, small assignments, rather than big end-of-term assessments. This encourages students to work at a more sustainable pace across the term, and gives them and their tutors feedback on how well they have understood each concept. For example, in our software design module, we aim to have a targeted practical exercise each week, so that students can practise a particular aspect of design by writing code and tests, and get feedback on their work within a few days. Of course this generates a large load on the tutors to mark and return a large number of assignments in a short cycle. It is tempting to relent, and reduce it to fortnightly, or monthly assignments, but again following the principles we would apply in an agile project, we have tried to use automation to give initial feedback early, and make the work of the human marker easier, so that it can be done more often. We are not there yet, and there is still a lot of work for the tutors to do each week to give good quality feedback, but it feels like we are heading in the right direction.
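A toy sketch of what such automated first-pass feedback can look like (the exercise and the checks are invented for illustration; real marking would run a full test suite in a sandbox and a human would add qualitative feedback):

```python
# Run a few automated checks against a student submission and produce an
# initial feedback report, so the human marker starts from the results
# rather than from scratch.

def submitted_double(x):
    return x * 2  # stand-in for a student's submitted function

CHECKS = [
    ("doubles a positive number", lambda f: f(3) == 6),
    ("doubles zero", lambda f: f(0) == 0),
    ("doubles a negative number", lambda f: f(-2) == -4),
]

def auto_feedback(func):
    results = [(name, check(func)) for name, check in CHECKS]
    passed = sum(ok for _, ok in results)
    report = [f"{'PASS' if ok else 'FAIL'}: {name}" for name, ok in results]
    report.append(f"{passed}/{len(results)} checks passed")
    return "\n".join(report)

print(auto_feedback(submitted_double))
```

Even this small amount of automation returns feedback within minutes of submission, leaving the weekly human-marking effort for comments the machine cannot make.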

We still have lots of problems to solve, and the constantly changing state of the software industry means that we will have to constantly update our curriculum to stay relevant, balancing computer science fundamentals (which hopefully do not change that often) with industrial trends and the application of modern tools and techniques. But, as we would if we were running a software project, we hope to continue to inspect and adapt and continuously improve.

Credits: sitepoint

The Main Principles of the Kanban Methodology

The term Kanban comes from Japan via the Toyota production system, which is well known in specialist circles. It would be great if everyone knew about the Kanban methodology and its basic principles: lean manufacturing, continuous development, customer orientation, etc. All of these principles are described in Taiichi Ohno’s book, Toyota Production System: Beyond Large-Scale Production.

The term Kanban translates literally: “kan” means visible or visual, and “ban” means a card or board. Cards are used throughout the Toyota plants to keep inventory management lean — no cluttered warehouses, and workshops with sufficient access to parts.

Imagine that your workshop installs Toyota Corolla doors and there is a pack of 10 doors near your workspace to be installed, one after another, onto new cars. When there are only five doors in the pack, you know that it is time to order new doors. Therefore you take a Kanban card, write an order for another 10 doors on it, and bring the card to the workshop that manufactures doors. You are sure that new doors will be manufactured by the time you have used the remaining five doors.

That’s the way it works in Toyota workshops: when you are installing the last door, another pack of 10 doors arrives. You constantly order new doors only when you need them.

Now imagine that this Kanban system works across the whole plant. There are no warehouses with spares lying around for weeks or months. All the employees work on request and manufacture only the necessary number of spares. If there are more or fewer orders, the system adjusts to match.

The main idea of Kanban cards is to scale down the amount of work in progress. For example, only 10 cards for doors may be issued for the whole manufacturing line. That means only 10 ready-made doors will be on the line at any time during the production loop. Deciding when to order those doors is a task for those who install them. Always limited to 10 doors, the installers alone know the upcoming needs of the workshop and can place orders with the door manufacturer.
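The door story can be sketched as a small simulation. The batch size and reorder point come from the example above; the lead time of five installations per ordered batch is an assumption made so the numbers work out:

```python
# Toy reorder-point simulation of the Kanban door example: the installer
# consumes one door per step and hangs a kanban card (orders a new batch of
# 10) when stock falls to the reorder point of 5.

BATCH = 10          # doors per kanban card
REORDER_POINT = 5   # place an order when this many doors remain
LEAD_TIME = 5       # installations that pass before the ordered batch arrives

def simulate(steps: int) -> int:
    stock, pending, max_stock = BATCH, [], BATCH
    for _ in range(steps):
        stock -= 1                       # install one door
        pending = [t - 1 for t in pending]
        if pending and pending[0] == 0:  # the ordered batch arrives
            stock += BATCH
            pending.pop(0)
        if stock <= REORDER_POINT and not pending:
            pending.append(LEAD_TIME)    # hang a kanban card: order a batch
        max_stock = max(max_stock, stock)
    return max_stock

print(simulate(40))  # prints 10: stock never exceeds one batch
```

The point of the sketch is that inventory stays bounded at one batch while the installer never runs out, which is exactly the lean behaviour the cards are designed to produce.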

This methodology of lean manufacturing was first introduced at Toyota, and many companies all over the world have since adopted it. But these examples refer to manufacturing, not to software engineering.

How Does the Kanban Methodology Work for Software Development?

Let’s start by looking at the differences in project planning between Kanban and other agile methodologies.

The difference between the Kanban methodology and SCRUM is that:

  • There are no time boxes in Kanban for anything (neither for tasks nor for sprints)
  • Tasks in the Kanban methodology are larger, and there are fewer of them
  • Estimates in Kanban are optional, or there are none at all
  • There is no “team velocity” in Kanban; only the average time for a full implementation is tracked

Now look at this list and think: what remains of the agile methodology if we remove sprints, increase task sizes and stop measuring the team’s velocity? Nothing?

How is it even possible to talk about any supervision over development if all the major tools of control are removed? For me, this is probably the most important question about the Kanban methodology.

Managers always think about control and try to attain it, though they don’t really have it. A manager’s supervision over the development process is a fiction. If a team doesn’t want to work, it will fail a project despite any level of control.

If a team has fun while working and works with total efficiency, then there is no need for control, because it just disturbs the process and increases costs.

For example, a common problem with the SCRUM methodology is higher costs due to discussions, meetings and big losses of time at the boundaries between sprints, when at least one day is spent closing a sprint and another day starting the next. If a sprint is two weeks, then two days out of ten working days is 20%, which is a heck of a lot. So when using the SCRUM methodology, something like 30-40% of the time is wasted on supporting the process itself, including daily stand-ups, sprint retrospectives and so on.

The Kanban development methodology differs from SCRUM in its focus on tasks. The main objective of a team in SCRUM is the successful completion of a sprint. In the Kanban methodology, tasks come first. There are no sprints, and a team works on a task from beginning to end. Deployment happens when a piece of work is ready, based on the presentation of the work done. A team that follows the Kanban methodology should not estimate the time to fulfill a task, since there is little sense in it and such estimates are almost always wrong.

Why should a manager need a time estimate, if he or she believes in the team? The objective of a manager who uses the Kanban methodology is to create a prioritized task pool, and the team’s objective is to fulfill as many items from this pool as possible. That’s it. There is no need for any control measures. All the manager needs to do is add items to the pool or to change their priority. This is the way a Kanban manager runs a project.

The team works from a Kanban board. It may look like this:

Example Kanban board

Columns from left to right on the Kanban board:

  • Goals: This is an optional, but useful, column on the Kanban board. High-level goals of a project may be placed here so everyone on the team knows about and can be regularly reminded of them. Example goals could be “To increase work speed by 20%” or “To add support for Windows 7”.
  • Story Queue: This column holds the tasks that are ready to be started. The card with the highest priority is taken first and moved to the next column.
  • Elaboration & Acceptance: This column and all the others before the “Done” column may vary, based on the workflow of individual teams. Tasks that are under discussion — an uncertain design or code approach that needs to be finalized, for example — may be placed here. When the discussion is finished, it is moved to the next column.
  • Development: The task lives here until the development of the feature is completed. When the task is complete, it is moved to the next column. If the architecture is incorrect or uncertain, it may be moved back to the previous column.
  • Test: The task is in this Kanban column while it is being tested. If there are any issues, it is returned to “Development.” If there are none, then it is moved to the next column.
  • Deployment: Each project has its own deployment. This could mean putting a new version on the server or just committing code to the repository.
  • Done: The card appears in this section of the Kanban board when the item is completely finished and doesn’t need to be worried about anymore.

Top-priority tasks may appear in any column. Planned or not, they are to be performed immediately. A special column may even be created on the Kanban board for these items. In our example picture, it is marked as “Expedite”. One top-priority task may be placed in “Expedite” for the team to start and finish as soon as possible — but only one such task can exist on the Kanban board! If another is created, it should be added to the “Story Queue” until the existing top-priority task is dealt with.

Let’s talk about one of the most important elements of the board. Do you see the numbers under each column on the example board? This is the number of tasks that can be placed simultaneously in each column. The figures are chosen experimentally, but they are usually based on the number of developers in the team — the team’s capacity for work.

If there are eight programmers on the team, you might give the “Development” column a 4. The programmers can only work on four in-development tasks at a time and will have many reasons to communicate and share experiences. If you put a 2 there, they may begin to feel bored and waste too much time with discussions. If you give it an 8, then each programmer will work on their own task, but some items will stay on the board too long, while the main aim of the Kanban methodology is to shorten the time from the beginning of a task until its end.

No one can give you an accurate answer on task limits — each team is different. A good place to start is dividing the number of developers by two, then adapting the figures from experience.

By “developers” I mean not only programmers, but other specialists too. QA specialists, for example, are the developers for the “Test” column, since testing is their responsibility.
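The board and its limits can be sketched as a simple data structure. The class below is invented for illustration (the limits assume the hypothetical eight-programmer team above); the key behaviour is that a move into a full column is refused, which is what makes a bottleneck visible:

```python
# A Kanban board with per-column WIP limits. None means unlimited.
WIP_LIMITS = {"Story Queue": None, "Elaboration & Acceptance": 3,
              "Development": 4, "Test": 3, "Deployment": 2, "Done": None}

class KanbanBoard:
    def __init__(self):
        self.columns = {name: [] for name in WIP_LIMITS}

    def add(self, task):
        self.columns["Story Queue"].append(task)

    def move(self, task, src, dst):
        limit = WIP_LIMITS[dst]
        if limit is not None and len(self.columns[dst]) >= limit:
            return False  # column is full: the showstopper is now visible
        self.columns[src].remove(task)
        self.columns[dst].append(task)
        return True

board = KanbanBoard()
for t in ["login", "search", "billing", "export", "reports"]:
    board.add(t)
for t in ["login", "search", "billing", "export"]:
    board.move(t, "Story Queue", "Development")

# The fifth move is refused: Development is at its WIP limit of 4.
print(board.move("reports", "Story Queue", "Development"))  # prints False
```

When a move is refused, the team has to clear the blocked column first rather than piling up more work in progress.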

How Teams Benefit from Kanban

What benefits will a team derive from a Kanban methodology with these limitations?

First, decreasing the number of tasks performed simultaneously reduces the time it takes to complete each one. There is no need to switch contexts between tasks or keep track of multiple pieces of work, since only the necessary actions are taken. There is no need for sprint planning and 5% workshops, because the planning has already been done in the “Story Queue” column. In-depth development of a task starts only when the task is started.

Second, showstoppers are seen immediately. When the QA specialists, for example, can’t handle testing, then they will fill their column and the programmers who are ready with new tasks won’t be able to move them to the “Test” column. What shall be done then? In such a situation it is high time to recall that you are a team and solve the problem. The programmers may help to accomplish one of the testing tasks, and only afterward move a new item to the next column. It will help to carry out both items faster.

Third, the time to complete an average task can be calculated. We can log the dates when a card was added to the “Story Queue,” when it was started, and when it was completed. From these three points, we can calculate the average waiting time and the average time to completion. A manager or a product owner can then calculate anything he or she wants using these figures.
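A quick sketch of that calculation, with invented dates for three cards:

```python
from datetime import date

# Three timestamps per card: when it entered the Story Queue, when work
# started, and when it was completed. The dates are made up for the example.
cards = [
    {"added": date(2017, 3, 1), "started": date(2017, 3, 3), "done": date(2017, 3, 9)},
    {"added": date(2017, 3, 2), "started": date(2017, 3, 6), "done": date(2017, 3, 10)},
    {"added": date(2017, 3, 4), "started": date(2017, 3, 5), "done": date(2017, 3, 12)},
]

def avg_days(pairs):
    pairs = list(pairs)
    return sum((b - a).days for a, b in pairs) / len(pairs)

waiting = avg_days((c["added"], c["started"]) for c in cards)   # queue time
cycle = avg_days((c["started"], c["done"]) for c in cards)      # cycle time

print(f"average wait: {waiting:.1f} days, average cycle: {cycle:.1f} days")
```

Shrinking the second number is the whole game in Kanban; the first tells you how long work sits in the queue before anyone touches it.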

The Kanban methodology may be described with only three basic rules:

  1. Visualize production:
    1. Divide your work into tasks. Write each of them on a card and put the cards on a wall or board.
    2. Use the columns mentioned above to show each task’s position in the workflow.
  2. Limit WIP (work in progress or work done simultaneously) at every stage of production.
  3. Measure cycle time (average accomplishment time) and improve the process constantly to shorten this time.

There are only three basic rules in the Kanban methodology!

There are nine basic rules in the SCRUM methodology, 13 in the XP methodology, and more than 120 in the classic RUP methodology. Feel the difference.

Credits: Appdevelopermagazine

Application developers are increasingly reliant on open source component parts because pre-fabricated components speed up innovation and save developers the time (and money) of having to write code from scratch.

But with 6.1% of component downloads containing a known security vulnerability it’s inevitable that defective parts will make their way into production – especially with component management practices lagging. Up until recently it’s been difficult for organizations to fully grasp the enormity of what it means to have to work backwards to fix the use of defective, outdated, and risky components in applications.

We sat down with Derek Weeks, VP and DevOps Advocate at Sonatype, to chat about the prevalence of defective components in the software supply chain and applications, how the cost of rework and bug fixes negatively impacts innovation, and what companies can do about it.

ADM: What is a software supply chain?

Software supply chains are just like the supply chains used in manufacturing around the world. A typical supply chain has suppliers who build parts and make those available to manufacturers through a number of distribution channels. Those manufacturers take those parts and use them to assemble finished goods that they then sell to their customers. In software supply chains the parallels to traditional supply chains are clear, but the suppliers are open source projects that create open source and third-party components. Those components are then made available on the internet through large public warehouses of open source components.

Any software development team around the world that is manufacturing software using these parts as building blocks to assemble their own applications can freely access these warehouses. These components are then assembled by the software development teams into finished goods, which are software applications that all of us either rely on for services or purchase as end products.

ADM: Why is software no longer written from scratch?

I think a lot of people who are not familiar with how development has changed in the last decade believe that software is built from scratch, and that there are developers out there who code every single line within an application.

In reality, use of open source and third-party components over the past 10 years has become a commonplace development practice. Developers are sharing their best code by packaging it up into components for other developers to reuse. So, rather than write my own logging framework, web application framework, or encryption functions for an application, I can actually go to the internet and source those for free from the developers’ open source projects that supply these parts. What this means is that as a developer I don’t need to write from scratch anymore and I can accelerate the pace of developing new applications. The proliferation of open source has added a tremendous new velocity to software development practices around the globe.

ADM: How many open source components are being consumed, and are all parts created equal?

Open source components are being consumed at almost unimaginable volumes today. In the Java realm of software development last year, we saw more than 31 billion download requests happen across a global population of about 10 million Java developers. While Java developers are consuming billions of these component parts, component use is not limited to Java development alone. There are different component formats for different development languages – component formats like npm for JavaScript, NuGet for .NET developers, PyPI packages for Python developers, etc.

Within these billions of components, one of the secrets out there is that not all of these parts are created equal. There are millions of parts available to developers. Of those millions, versions could be as young as one day or as old as 11 years. The average open source project releases about 14 new versions of their component parts per year. Some of those new releases are to make the components higher performing, less buggy, more functional, or more secure.

Last year, in the research that we did, we saw that more than six percent of the components being downloaded had a known security vulnerability. Across billions of downloads, about 1 in 16 components had a known security defect on the day it was downloaded.

ADM: How are organizations vetting the quality and security of components in their software supply chains today?

The evolution of software development has created the need for DevOps-native tools that allow developers to automatically evaluate millions of components. An example I like to use is that of a big healthcare company with 2,000 software developers that consumes 16 million parts annually. Only one person is employed at that business to assess whether those parts are good or not, and that person looks only at the software licenses of those components, not at the version, how old the components are, or any known security defects. Moreover, that one person alone cannot keep up with annually auditing the 16 million parts being consumed by their developers.

The day-to-day reality inside that organization is that the person in charge of approving components is busy 100% of the time. Even if the organization employed 100 more people in that role, it could not keep pace by manually evaluating what it consumes.

ADM: Are existing practices to identify and track defects in open source components keeping pace with development?

The existing practices in the industry today are really more manual than automated and when manual practices are in place, those practices cannot keep up with the volume of activity that is happening across the industry. An average organization consumes about 225,000 components a year in the manufacturing or development of its software applications. For an organization to try and evaluate every one of those components and determine whether it is good or has a known defect is very difficult and time-consuming.

In fact, the volume of consumption has outpaced manual approaches to the evaluation and governance of these components for probably six or eight years now. Many companies have therefore turned to automated ways to identify, track and trace components, and to set policies for which components are acceptable to use in their organization and which are not.
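An automated policy gate of the kind described can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: the component metadata and thresholds are invented, and a real pipeline would pull this data from a repository manager or a vulnerability feed:

```python
# Invented component metadata; a real pipeline would pull this from
# a repository manager or a vulnerability feed.
components = [
    {"name": "commons-collections", "version": "3.2.1",
     "age_years": 9, "known_cves": 1},
    {"name": "slf4j-api", "version": "2.0.13",
     "age_years": 0, "known_cves": 0},
]

def violates_policy(component, max_age_years=5, allow_cves=False):
    """Flag components that carry known vulnerabilities or are too old."""
    if component["known_cves"] and not allow_cves:
        return True
    return component["age_years"] > max_age_years

rejected = [c["name"] for c in components if violates_policy(c)]
print(rejected)  # the automated gate blocks these before a build proceeds
```

Because the check is code rather than a manual review queue, it scales to hundreds of thousands of components a year and gives developers an instant answer.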

For example, a company I know in the financial services industry has a governance practice in place for defining which components their developers can use. They told me that they had more than 800 components approved across their application development portfolio. However, when we worked with them to analyze how many components were actually in use across that portfolio, they found that developers were actively using 13,000 different open source and third-party components.

ADM: Can you share an example of where open source governance practices are not keeping pace?

Part of the discrepancy between the number of components that were approved (800) and the number actually in use (13,000) had to do with the approval or governance process that was in place. Developers wanted to know immediately which components were safe or unsafe to use, or which did or did not fit the company's policy. Instead, they were forced to wait anywhere from two to nine weeks for a response from the governance body within their company. When developers want an instant decision and don't get one, they will find a workaround. That workaround led to more than 12,000 unapproved components being actively used in development, skirting the approval process.

This workaround led to lower-quality components being used actively throughout the development organization, and that’s something that they’re now asserting more control over while allowing their developers to use the highest-quality components permissible from open-source projects worldwide.

ADM: What lessons can we learn from traditional manufacturing to improve how software is developed?

There are lessons from traditional manufacturing practices, especially high-velocity, high-volume manufacturing, that can be applied to software. Many of them originate in the lessons learned at Toyota through Deming and other manufacturing thought leaders. The key practices I've seen employed across a number of organizations involve relying on the fewest and highest-quality suppliers. The leading organizations also track and trace the use of components across their software supply chains, so that if any component is discovered to be defective, they can immediately locate it and begin to remediate it quickly within the organization.

In the 2016 State of the Software Supply Chain report, one of the common practices we saw being carried over from traditional manufacturing into software development was the use of a software bill of materials. We highlighted how organizations like Exxon and the Mayo Clinic were using software bills of materials to identify the components used in the applications they were developing in-house, as well as in applications they were purchasing from other development organizations. They used these bills of materials to determine which component parts went into those applications, because they wanted to understand whether any of them had a known defect, and in particular whether any had a known security defect.
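The track-and-trace use of a software bill of materials amounts to a lookup, which can be sketched as follows. The application and component names below are invented for illustration; real bills of materials use richer formats such as CycloneDX or SPDX:

```python
# A toy bill of materials: application -> list of (component, version) pairs.
sbom = {
    "claims-portal": [("struts2-core", "2.3.20"), ("guava", "33.0.0")],
    "billing-api": [("jackson-databind", "2.17.1")],
}

def applications_using(component, version):
    """When a vulnerability is disclosed for (component, version),
    locate every application that ships that exact part."""
    return [app for app, parts in sbom.items() if (component, version) in parts]

print(applications_using("struts2-core", "2.3.20"))
```

With an up-to-date inventory like this, a newly disclosed defect becomes a query rather than a weeks-long audit.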

ADM: If open source software components have so many defects, should we stop using them?

Most organizations that hear how many open source and third-party components they're consuming, and that also understand the defect rates, initially react by believing they need to stop using these components. They get scared and feel they cannot allow that volume of risk into their organization.

However, the reality is software development practices rely so much on these components that most organizations that would make the decision to stop using components would simply have to stop developing software. The use of components has proliferated so much it’s nearly impossible to stop the consumption.

Given that scenario, it's important that we learn to manage software supply chains, just as other manufacturing organizations have learned to manage theirs. Organizations can do this in a high-volume, high-velocity environment while maintaining quality. Manufacturing organizations around the world, such as Apple, Ford Motor Company or Pfizer, use a huge number of parts to assemble or produce the goods they deliver to customers.

These companies have figured out how to work with their suppliers and supply chains to vet components before they come in the door. They use the highest-quality and latest versions of parts in order to deliver the best products to their customers. By managing these supply chains, they have proven they can reduce the cost of producing goods by using the highest-quality parts.

ADM: Have organizations quantified the value of better managing their software supply chains?

Absolutely. We have worked with a variety of organizations that have used DevOps-native tools to reduce the number of defective components they consume, both by choosing high-quality components and by reducing the variety of components across their software supply chain. We've seen organizations not only improve the quality of their software applications by as much as 50 or 60 percent, but also improve the productivity of their development organization by as much as 30 to 40 percent in the same time frame.

Organizations can see the same kind of results by using the highest-quality parts from the start; those development organizations then spend less time and effort fixing defects that would otherwise be caught later in the software development lifecycle, or even out in production environments.

Organizations that choose to manage their software supply chain from the earliest stages and bring in the highest quality parts can reduce the amount of time, effort, and money that they’re spending to remediate these defects. They then can apply that money towards their innovation budget to continue to differentiate their businesses and make them more competitive.

Credits: Computerweekly

Software development practices in the enterprise have traditionally focused on delivering high-quality code built on proven platforms. But the web and the emergence of apps, built on web-scale infrastructure, are rewriting the rulebook.

In fact, businesses struggle to compete with startups that can somehow maximise the value of the new economy, and are able to undermine traditional business models.

“Over the past 20 years, IT has been set up for efficiency, cost reduction and doing things as safely as possible,” says Benjamin Wootton, co-founder of Contino.

He says companies are now driven by the need to work faster and are becoming more agile in order to improve the customer experience.

“This is applying pressure on IT and how we develop software,” he says.

Whereas IT heads previously implemented heavyweight internal IT processes and used outsourcing to reduce cost and maintain quality, in Wootton’s experience, this style of running IT slowed down IT departments. “DevOps and continuous delivery allow organisations to operate faster, which is what enterprises want to do today,” he says.

But Wootton argues that among the challenges for IT leaders is the fact that big enterprises are risk averse and tend to stick with a tried and trusted formula, often contrary to contemporary best practices.

Shifting the enterprise mindset

Kingsley Davis, a partner at Underscore Consulting, adds: “You want to deliver quickly and at pace, which means having a clear strategy about what things are not important for the product.”

Technology such as Docker, which enables developers to package code to run in its own container, along with the ability to have short feedback loops, helps businesses adapt more quickly. Such technology and techniques form the basis of the cultural shift that companies of all sizes need to make to enable their developer teams to become more adept at delivering software quickly, says Davis.
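As a rough illustration of the container workflow Davis describes, a Dockerfile packages a service together with its runtime so the same artifact runs identically on a developer's laptop and in production. The sketch below is generic; the base image, file names and port are assumptions, not taken from any project mentioned here:

```dockerfile
# Generic sketch: package a small Python service into a container image.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

Built once with `docker build`, the resulting image gives every developer and every environment the same short feedback loop.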

“Culture is very easy to instil when there is a small group of people,” he says. “Hiring is key.”

Davis recommends that IT leaders plan in advance, and hire people appropriate to the direction the IT strategy is taking.

Russ Miles, lead engineer at Atomist, believes IT leaders can learn much from the way webscale organisations approach software development. “Organisations of any size have to compete,” he says.

The speed of change is such that IT leaders cannot afford not to adapt their business processes. “People look at what Netflix is doing and the thing to take away is that agile software development will only get you so far,” says Miles. “The software itself needs to be as adaptable as the process.”

What this boils down to, says Miles, is that IT leaders need to figure out how to adapt systems and the work IT departments need to do, to achieve the speed and flexibility required by the business.

A case for smarter analytics

If they cannot meet the needs of the business, business users will go elsewhere, or even develop the systems themselves.

“Business users are driving software development,” says Frank Ketelaars, big data technical leader for Europe at IBM. This is a form of shadow IT, he adds. “They use spreadsheet data warehouses as their own analytics platform.”

Given the need for developers to be productive and create applications quickly, Ketelaars says technologies such as Apache Spark make it possible for businesses to develop machine learning capabilities more easily than before.

Also, the availability of deep learning services is pushing the boundaries of analytics in terms of the massive computational intelligence such algorithms can bring to bear on hard-to-solve problems. But with such technological developments come new challenges.

Ketelaars says the plethora of analytics tools available makes it hard to validate data models. “It is extremely difficult working with the variety of analytics libraries and tools that are available,” he says.

What developers need, says Ketelaars, is a route to analytics services via a common programming interface, giving them the features of an analytics tool within their own applications.

Another challenge facing analytics applications is how to make sense of the data. “Deep learning is here, but one thing that is missing is context,” says Ketelaars. “If I have a picture with two people running after each other, and one has a frisbee, I know this image is about playing.”

Deep learning algorithms can instantly recognise the image of two people running, he says, but adds: “The context changes dramatically if one of them has a chainsaw. This is where context controls what you should do with the image.”

For Ketelaars, understanding context will be a key requirement in applications, to understand the meaning behind the analytics. “You have to start thinking about what data you have to control the behaviour of your application,” he says.

Essentially, applications become smarter, providing users with the information they need based on a deep contextual understanding of what it deems relevant or important.

Improving tooling

Arguably, there is room for improvement among the array of tools, building blocks and techniques that developers use to create software, says Phil Trelford, founding member of the F# Foundation. “A general-purpose programming language is a bit like a spanner and we are all trying to build large systems with spanners,” he says. “What I would like to see is precision tools.”

In fact Trelford goes further, saying the industry needs better “meta tools”, in other words software tools to help the developers of programming tools build precision instruments, rather than generic spanners.

While, as the saying goes, “a bad workman blames his tools”, software developers are keen to see improvements in the tooling they use. In part, this helps them cope with the added complexity when coding and operations become one, as in DevOps, says Trelford.

“Personally, I would like to be able to say, ‘I need to make this thing happen’, then make it as quickly as possible,” says Miles. But today, developers need to draw together a lot of threads to create and run applications successfully, and complexity is increasing all the time, he says. “Smart tooling will help them handle the cognitive overhead,” he adds.

Enterprises appear to be following smaller companies in adopting new, more productive ways to code. For instance, WhatsApp, which was developed by a handful of Erlang developers, was sold to Facebook for $19bn, while Walmart recently acquired the F#-based Jet.com e-commerce platform for $3bn.

While procedural programming languages such as Java and C have been the bread and butter of enterprise software development, what has been particularly interesting for software development is the uptake of functional programming in recent years, particularly among companies that need to support large numbers of internet customers.

As Trelford explains: “Apart from Java, which has huge user groups, the biggest programmer user groups in London are those of the functional programming languages. I run the functional London meeting here. We have been meeting here for about six years and we have over 1,000 members. Scala, I believe, has 1,500-2,000 members and Clojure has been growing, as has Erlang.”

Many proponents of these programming languages talk about how little code they need compared with using a procedural language, which makes them attractive for writing code quickly, says Davis.

But among the reasons for the interest in functional programming is reliability. Malcolm Sparks, director and founder of Juxt, says: “One of the issues we still face as developers is how to build really big software systems. The bigger the project, the more likely it is to fail. It is easier to build small software systems.”

Sparks argues that, ideally, software developers should look at architecting systems by integrating many small software components, each of which has been developed to be highly reliable. “We are moving to a world where individual software systems are becoming so critical and so important that we had better build them using the best tools and the best languages, and this is why we are seeing a rise in interest in functional programming,” he says. “Functional programming is a better approach for writing highly reliable systems.”
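The reliability argument can be glimpsed even outside a dedicated functional language. In this small, purely illustrative Python sketch, each step is a pure function with no shared state, and the larger pipeline is simply a composition of independently testable parts:

```python
from functools import reduce

# Pure functions: each output depends only on its input,
# so there is no shared state to corrupt.
def normalise(order):
    return {**order, "currency": order.get("currency", "GBP").upper()}

def add_total(order):
    return {**order, "total": sum(order["items"])}

def compose(*steps):
    """Assemble a larger system from small, independently reliable parts."""
    return lambda value: reduce(lambda acc, step: step(acc), steps, value)

process = compose(normalise, add_total)
result = process({"items": [10, 5], "currency": "gbp"})
print(result)
```

Each small function can be verified in isolation, which is the property Sparks is pointing at: big systems built from many small, highly reliable components.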

Changing software development landscape

Among the changes Miles is seeing is that software development is no longer a factory floor to churn out new products. Rather, he believes software development is evolving into a continuous R&D practice.

“Companies that regard software as a driver for them are the ones that will win and one of the pieces of advice I give to company boards is that they should not think of software development as a general problem that we can solve by throwing more people at it,” says Miles. “Think of software development as a place where you might be surprised what comes out.”

Enterprises do not often have the luxury of greenfield development, but as Trelford points out, where enterprises need a new system, there is the opportunity to experiment with the least risk.

Wootton says: “Everyone is always excited about the new greenfield stuff, but there is a real business case with legacy.” Often, the real business case is actually the J2EE or .Net code that has been running for a decade or more and requires a big support team.

“You might do something crazy like services on your mainframe, but it turns out this may be where you get the biggest return on investment,” he says.

“Legacy is a bad thing,” says Sparks. While it is exciting to create new code based on microservices and perhaps functional programming, the biggest challenge faced by corporate IT is often how to handle a growing legacy of old stuff.

“It can be the millstone that drags you to the bottom of the sea,” says Sparks, who urges CIOs to look constantly at what can be decommissioned, and have development teams write new applications. These not only help to move the business forward but, at the same time, enable IT to decommission something else.

Davis believes that the new techniques available to developers, such as reusable microservices, containers, functional programming and continuous delivery, offer enterprises an ideal opportunity to reduce risk and improve reliability.

“It is all about safe, small-scale scalability,” he says. The tools and techniques discussed enable IT departments to avoid the risk of modifying mission-critical applications by augmenting them in a highly controlled way, adds Davis.

Credits: Dzone.com

Some of the best software developers I know didn’t start out their careers with any interest in software development.

It may be difficult to believe, but sometimes having a different background — in a completely unrelated field — is a huge benefit when going into the field of software development.

I’m not entirely sure why this is the case (although I have some ideas, of course), but time and time again, I’ve seen software developers with only a few years’ experience, but broad experience in another field, end up surpassing software developers with much more experience.

If you are thinking about becoming a software developer but you’ve been in another, unrelated field for some time, hopefully this chapter will provide you with encouragement and some ideas of how to best make that transition.

The Benefits of Switching Mid-Career

Most of what I am going to be talking about here is my own speculation, since I started out my career in software development and later transitioned into the role I am in now, rather than starting out in some unrelated field.

However, like I said, I’ve met enough really successful software developers who started out in completely different fields to have at least a rough idea of what makes them so successful.

One huge benefit I’ve observed for people who have switched into software development from another field is that they often bring with them a large swath of people skills and soft skills that are more rare in the software development field.

It’s no secret that software developers sometimes tend to lack these people skills and other soft skills, and that I find them to be extremely valuable (obviously, since I wrote a book teaching them and have pretty much built an entire business around the idea).

I find that those soft skills that may be developed in other professions translate really well into the software development field and have the tendency to move people who possess them ahead of the normal learning curve. Having them may give you a distinct advantage, especially if you worked in a field where soft skills or people skills were highly valued.

I’ve also found that the mindset of success tends to be widely applicable and that if a person is successful in one professional vocation, the chances are they’ll be successful in any vocation they pursue.

You’ll likely find this to be the case if you are currently in another field — even a very distantly related one — when beginning to make the transition.

Finally, I would say that the ability to think outside of the normal constraints that many software developers and highly technical people think within can be a huge advantage, as well.

There is a strong tendency toward what is called cargo cult programming, where programmers do things not because they work, but because other developers are doing them and they are seen as best practices. Having an outside perspective can give you the advantage of thinking in a way that is unclouded by the preconceived notions and ideas that are pervasive in the programming community.

While brand new software developers without any experience in any vocation may also have this same perspective, they are often more susceptible to falling into the same traps because they lack the depth of experience and confidence in their own thinking that someone with more experience likely possesses.

Again, I don’t know the exact magic formula that seems to make software developers who started in a different background so successful, but those are a few of my ideas.

The Disadvantages

I don’t want to paint an overly rosy picture of switching into software development from another field. It’s certainly not easy, and there are definite disadvantages. It’s also true that you are not guaranteed to be a stellar programmer just because you used to be a nurse.

One huge disadvantage that blindsides many transitioning developers is the complexity and amount of knowledge required to be a computer programmer.

There are plenty of fields where you could learn something in college or even have some on-the-job training, and in a few months, you’d be able to do the job.

I’m not saying that software development is the only difficult field there is or that anyone can do another vocation without training, but software development is many magnitudes more difficult than the average profession.

Yes, that statement may piss some people off, but it’s completely true.

In fact, if you are having a difficult time accepting that statement, you might have a difficult time making the transition because you will likely not be prepared for all you need to learn.

So, it can definitely be a disadvantage to come into this field thinking it’s just like any other field or job that you can learn.

You will have to do a good deal of studying and intentional practice to become even mildly proficient in this field, which is part of the reason for writing this long volume.

Another major disadvantage is, obviously, time.

This can be overcome somewhat by the advantages I listed above, which can accelerate your learning curve, but you are still going to have to play some catch-up if you want to fill the holes in your knowledge caused by a lack of direct experience.

Even if you have only spent three years in the field and are as good as a software developer who has spent 10 years, you are still not going to have seen as many situations and problems as that person (in most cases), so that lack of experience may make some things a bit more difficult.

How to Do It

OK, now that you’ve got some idea of what you may be up against, let’s talk about how to overcome some of these disadvantages and how to be as successful as possible when transitioning mid-career into software development.

Plenty of people have done it. I’ve even received emails from software developers who’ve made the transition late into their fifties, so it’s certainly possible.

Here’s how.

Transition at Your Current Job

It’s difficult to break into the field of software development. I’ve already spent a good deal of time in previous chapters talking about how to get your first job because it definitely isn’t easy. No one really wants to hire a software developer without prior programming experience.

How, then, do you get that job if your resume says you’ve been an accountant for the last 20 years? Well, one way is to start transitioning into software development from your current job.

Many software developers I know started out in a completely different field and found that they could learn a little programming here and there to help them with their work or to build some kind of tool that would help everyone at their work.

If you are interested in becoming a software developer, you might want to look around in your current work environment and see if you can find places where you could start using your newfound skills.

This is a great way to transition into software development because if you start programming at your job — even if it’s just small projects — you can then put that on your resume.

You may even find that you can create a software development role for yourself within the company you are working for just by automating things or building tools that end up being valuable enough that your current employer will pay you to keep doing what you’re doing.
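The kind of small workplace tool described here need not be elaborate. As a hypothetical example (the report columns and figures are invented), a few lines of Python can replace a spreadsheet task a team would otherwise do by hand:

```python
import csv
import io

# Invented input: an export a team would otherwise total up by hand.
raw = io.StringIO("region,sales\nNorth,120\nSouth,80\nNorth,40\n")

totals = {}
for row in csv.DictReader(raw):
    totals[row["region"]] = totals.get(row["region"], 0) + int(row["sales"])

print(totals)
```

A script like this, pointed at a real exported file instead of the inline sample, is exactly the sort of modest automation that can become the first programming line on your resume.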

Start by taking on some of these side projects at work during your own time and then perhaps ask for permission to start transitioning some of these activities into your full-time position.

If you can pull this one off, you may not even need to go out and apply for a programming job. Once you are officially programming at work, you can always find another programming job somewhere else.

Look for a Way to Use Your Existing Background

Another tactic I’ve seen successfully employed is to use your existing background in an unrelated field to give you valuable domain expertise at a software development company who develops software for that unrelated field.

For example, suppose you had 20 years of experience as a nurse and you wanted to get into software development.

Yes, you could learn programming and then try to apply for any software development job that came along.

However, it might be a much better idea to look for software development companies that mainly operate in the healthcare industry or even healthcare companies who might employ software developers. By specifically applying for these kinds of jobs, you’ll be giving yourself a distinct advantage over other applicants who lack the domain expertise you have.

In software development, domain expertise can be enormously valuable because understanding the why and purpose of the software in a particular industry can prevent many errors from being made.

It may be much easier for a software development company to hire a developer with 10 years of software development experience, but someone who knows software development and has 10 or more years of domain expertise is going to be a much rarer find.

I was just talking to a developer with a genetics background who ended up getting a job with Oracle, because his previous career was in genetics and biological chemistry and Oracle was looking for developers to work on a product involving genetic research to help cancer treatment centers.

Try to use your existing, seemingly unrelated experience by finding a way to make it related. Just about anyone can do this because software exists in just about every major industry.

Be Willing to Start From the Bottom

Finally, I’d say that if you are switching into software development mid-career, you need to be willing to start at the bottom with the knowledge that your previous work experience will ensure you don’t stay there long.

It can be difficult to make a transition from a high-paying job where you have seniority and perhaps a reputation to being a lowly grunt being paid peanuts, but if you want to switch careers, you are going to have to be willing to do that — at least in the short term.

The software development world is more of a meritocracy than other industries, so it doesn’t really matter how much experience you have or who you know so much as what you can do (although reputation obviously plays an important part).

I’d advise you to plan on starting from the bottom, realizing that most of your skills are not going to carry over, and to be okay with that.

This will help you to avoid the frustrations you might otherwise face if you expect to make a lateral transition into this field.

Like I said, though, if you already have experience in another industry and have achieved success there, many of the soft skills you have developed will be likely to accelerate you through the ranks of software development.

You just have to be patient to begin with.