Sunday, August 5, 2018

Observations on the current state of software development for Web applications

This article describes what I've observed over the past ten years about the evolution of programming for Web applications. My subjective perspective has been influenced by having returned to software development after an absence of a few years, so I'll start off with a brief introduction to explain the context.

Setting the scene 

About 15 years ago, I took some time off from my software (s/w) development career to complete an MBA. At the time it also became very important to devote more attention to raising my younger kids for a while, since their mom needed to focus on a critical period in her own career.

Hence I balanced my part-time studies with parenting, until finishing the MBA in 2007, after which I gradually returned to working as a s/w engineer full-time. Initially this focus on hands-on development was primarily intended to reestablish my programming skills, which had become rusty.

However, in the course of this process I rediscovered my love of programming, which (like many people) I'd previously set aside in favor of management.

At first it was hard to get back into the groove, but eventually I hit my stride. Over time I've been able to hone my previous programming skills and learn enough new ones to embark on the development of a new product.

This new venture is happening via a start-up company called “Simulation Magic”, but more on that later. For now I’d like to reflect a bit on what I’ve observed in the s/w world, since returning to “active duty”.

Summarizing my own areas of expertise

Just to give you an idea of where I’m coming from, over the past ten years I’ve found my niche by gradually specializing in the combination of a few specific areas, as follows:

·        Primarily I’ve worked with the Microsoft .Net platform, either to develop and maintain Web applications, or to convert legacy Windows and Unix apps to the Web
·        For code development, I’ve used mostly C#, HTML, CSS, Visual Basic (VB) and JavaScript, but lately I’ve used Angular JS a lot too
·        From the database (DB) side, it's been mostly SQL Server, with a fair amount of report development via SQL Server Reporting Services (SSRS)
·        Along the way I’ve also become reasonably adept at using Generics, which are now such an essential part of contemporary object-oriented (OO) development environments.

Meanwhile I've also dabbled in the use of some IBM tools, like WebSphere and DB2, as well as doing some s/w development with Java and Linux-based platforms. Before going back to school to do my MBA, I'd been programming for many years with a variety of languages, from Fortran, to Assembler, to C++.

Over the course of my career, I’ve also worked on the development of diverse s/w products, from embedded firmware, to healthcare management, to banking applications, to process control systems.  My educational background includes several university degrees, with specializations in math, computer science, engineering and finance/accounting.  

At the same time, I’ve also come to enjoy the challenges of “forensic software”, which from my point of view includes anything that pertains to digging deep into a given set of program files. Thus what’s interesting for me in this area is the process of systematically identifying and resolving bugs, as well as refactoring existing code (i.e. as opposed to the type of forensic work that's typically done by security-oriented institutions).    

All of that led me to my latest venture, which is a new s/w product that combines Big Data, Analytics and Simulations. In brief, the idea behind the "big data and analytics" phase is to start by assimilating and describing the diverse input data from various sources. Then the next step is to analyze it, in order to predict the likely outcomes.

The subsequent "simulation" phase is then intended to illustrate all possible scenarios, using mathematical forecasting to prescribe ways to optimize those outcomes. In this way the system should both facilitate decision management and mitigate risk.

Further details on this approach to Big Data, Analytics and Simulations were provided in my previous posts to this blog, so I won't elaborate here. However, to me all of this is essentially a way to make use of the diverse skills that I've accumulated over the years.

Another factor, though, is that it’s a way to avoid the biggest enemies of typical s/w developer/nerds like me: complacency and boredom.

How developing s/w is analogous to renovating a house

One of my hobbies is renovation, so over the years I've had the opportunity to be involved in renovating a few small homes and rental properties. Since renovation is strictly a part-time gig for me, though, this often involves hiring contractors to get the work done.

Thus a valuable rule of thumb for this activity is that I try to avoid asking anyone to do something that I wouldn’t know how to do myself. Of course, sometimes this may just mean ensuring that I do the necessary research first, since there are many aspects of renovation that I either don’t have practical experience with or don’t wish to learn about (e.g. I’m lousy when it comes to plumbing).     

Well, in some ways it’s the same thing when it comes to s/w development: after all, how can you be an effective team manager if you ask people to do stuff that you don’t know how to do yourself? As with home renovations, if you sometimes don’t know how to do a particular programming or database task then that’s okay, since s/w development is so incredibly diverse today.

Nevertheless, you can still:

·         Read a relevant book, or Google the topic, so that you can at least talk about it intelligently
·         Admit to your team that you haven’t done this particular thing yourself before and then either learn it on your own, or have the humility to learn it alongside them

By employing this approach with respect to renovation work, I'm able to quickly smell a rat when contractors try to pull a fast one on me, so that I can then unceremoniously get rid of them. Conversely, when someone does a great job, I can be suitably impressed and express my sincere appreciation for their work.

In fact, I have great admiration for anybody who is able to display demonstrable expertise and/or craftsmanship in just about anything. So, if there's one thing that I've learned, it's that I need to shut up and listen to the experts. Hence, while I know enough to be dangerous in a lot of areas, I generally surround myself with people who know much more than I do … and then I gladly take their advice.

Game changers

Thus the topics that come to mind, when I think of game-changing innovation for Web development, include the following:

·         The evolution of Generics
·         The maturation of Angular JS and its siblings
·         The convergence of Java and C#
·         The ubiquitous application of Artificial Intelligence (AI) to so many contemporary areas of s/w development.

Generics

Over the past few years, the use of Generics has played an increasingly important role in the evolution of object-oriented programming. For example, in combination with generic delegate types they make it possible to pass anonymous methods around as strongly typed parameters, which is a very powerful feature. A closely related and frequently used construct is the Lambda expression, which has become very common, particularly whenever we access a database through an object-relational mapper (ORM).

Thus the DB calls tend to become more readable and less error-prone when we use Lambdas, since they appear to go directly to the entity that we seek, rather than systematically searching through the records for a match to our given query/conditions. However, I say that they "appear" to do this because this is actually an illusion and can thus be a double-edged sword.

In fact, those lovely and intuitive Lambda functions will ultimately get automatically translated into some more convoluted SQL code, which does indeed search the DB until a match either is or isn’t found.

Hence there is a trade-off: the increased clarity of using Lambdas comes at the price of auto-generated SQL code, which typically won't be as efficient as SQL written by an experienced SQL programmer. Meanwhile - as anyone who has ever tried to debug a Lambda statement knows - stepping through this code can be challenging, since the contents of the Lambda function call will generally get treated as a single block by the debugger.
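To make that trade-off concrete, here's a minimal sketch of the kind of Lambda-based DB call I have in mind. It assumes an Entity Framework-style DbContext; the Customer entity, its properties and the connection string are hypothetical placeholders rather than code from any real project.

    using System.Linq;
    using Microsoft.EntityFrameworkCore;

    // Hypothetical entity and context, for illustration only.
    public class Customer
    {
        public int Id { get; set; }
        public string City { get; set; }
        public decimal TotalPurchases { get; set; }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }

        // Placeholder connection string - substitute your own.
        protected override void OnConfiguring(DbContextOptionsBuilder options) =>
            options.UseSqlServer("Server=.;Database=Shop;Trusted_Connection=True;");
    }

    public static class CustomerQueries
    {
        // The Lambda reads as if we go "directly" to the customers we want...
        public static IQueryable<Customer> BigSpendersIn(ShopContext db, string city) =>
            db.Customers.Where(c => c.City == city && c.TotalPurchases > 10000m);

        // ...but the ORM actually translates the expression into SQL roughly like
        //   SELECT ... FROM Customers WHERE City = @city AND TotalPurchases > 10000
        // and the database still has to search (or use an index) to find the matches.
    }

The SQL shown in the comment is only indicative; the exact statement depends on the ORM version and the database provider.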

Finally, there is the tricky issue of what I think of as "two-pass" Lambda calls. Because the query is typically executed lazily (so-called deferred execution), the data is not returned at the point where the Lambda is written, but only when its results are enumerated, which means that the return of the data and the execution of surrounding code loops do not happen in step. This is not intuitive and can be confusing, and it sometimes requires restructuring the code, for example with nested loops or by materializing the results first.

Structuring the code this way ensures that the data fetched from the DB by the Lambda call is actually present when the appropriate iteration of the surrounding loop executes. Similarly, the convenience of using anonymous methods is offset by the relative difficulty of reading code that never explicitly names the methods being called.
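To illustrate what I mean by two passes, here's a small sketch that reuses the hypothetical ShopContext from the previous example. The point is simply that the Lambda by itself is only a description of the query; the data arrives when the results are enumerated (or explicitly materialized with ToList), not when the Lambda is written.

    using System;
    using System.Linq;

    public static class TwoPassExample
    {
        public static void PrintBigSpenders(ShopContext db)
        {
            // Building the query does NOT hit the database yet (deferred execution).
            var bigSpenders = db.Customers.Where(c => c.TotalPurchases > 10000m);

            // The SQL is only executed here, when the loop first enumerates the results.
            foreach (var customer in bigSpenders)
            {
                Console.WriteLine(customer.City);
            }

            // Alternatively, materialize the results up front ("pass one")...
            var cached = bigSpenders.ToList();

            // ...so that the surrounding loop ("pass two") iterates over data
            // that is guaranteed to already be in memory.
            foreach (var customer in cached)
            {
                Console.WriteLine(customer.TotalPurchases);
            }
        }
    }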

So, perhaps the moral of the story is that (despite the appeal of Generics) there is still no free lunch.

Angular JS and its siblings

I'll restrict my comments to Angular JS here (which I'll refer to simply as Angular), since it's the most popular brand of its ilk, but this discussion applies equally to other similar tools. Ditto for Java vs C#, which I consider to be functionally equivalent as programming languages, even though their compilation and runtime models differ under the hood.

Before learning Angular, I’d already become fairly adept at using JavaScript (JS), in combination with old-style VB code.  Hence I was fascinated by the way that JS lets you identify and manipulate specific objects on the client side, once the appropriate handle has been found for a given object.

Since this already seemed magical to me, I was pleasantly surprised when I later learned that Angular takes this a step further, by providing a rich object-oriented environment that includes Generics. Having come from a background that's heavily oriented towards C#, I was amazed at how similar the client-side Angular programming experience is to the analogous server-side C# programming.

On the other hand, I get the sense that new developers who have gone straight to client-side programming (without doing server-side development first) have been saddled with a limited perception of how to optimize their code. It's a bit like C++ programmers who never used Assembler: they may not be aware of the low-level implications of their relatively sophisticated C++ code.

So, when it comes to Angular vs C#, the problem (as I see it) is that there is an increasing tendency to try to do everything on the client side, rather than determining when to use server-side or client-side code on a case-by-case basis. Again, although Angular seems to magically access the DB (mostly via client-side Lambda-style calls), generally this DB access is ultimately routed through the server and translated into SQL code.

Now don't get me wrong – I realize why it's more fun to do everything via client-side Angular, rather than sometimes resorting to those relatively tedious server-side methods. However, from an optimization point of view, there can be a significant performance hit when automatically-generated server-side code is used instead of hand-written code that has been optimized by an experienced server-side programmer.

Hence (I'm just saying), when performance is an issue, it's usually worthwhile to actually do the triage of when we should use server-side vs client-side code. After all, database access is often the performance bottleneck in high-volume Web apps, which is one of the main reasons that we use distributed systems.
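As a rough illustration of that kind of triage, here's a sketch of a server-side endpoint in the ASP.NET Web API style, again reusing the hypothetical ShopContext and Customer classes from above. It filters, sorts and pages the data at the database, so that only the rows the Angular client actually needs ever cross the wire.

    using System.Linq;
    using Microsoft.AspNetCore.Mvc;

    // Hypothetical controller, for illustration only.
    [ApiController]
    [Route("api/customers")]
    public class CustomersController : ControllerBase
    {
        private readonly ShopContext _db;

        public CustomersController(ShopContext db) => _db = db;

        // GET api/customers?city=Boston&page=2&pageSize=50
        // The Where/OrderBy/Skip/Take chain is translated into a single SQL query,
        // so only one page of matching rows is returned to the client.
        [HttpGet]
        public IActionResult Get(string city, int page = 1, int pageSize = 50)
        {
            var results = _db.Customers
                .Where(c => c.City == city)
                .OrderBy(c => c.Id)
                .Skip((page - 1) * pageSize)
                .Take(pageSize)
                .ToList();

            return Ok(results);
        }
    }

The alternative - shipping the whole Customers table to the browser and letting Angular filter it there - also works, but it pushes both the bandwidth and the filtering cost onto the client, which is exactly the sort of case where doing the work on the server side pays off.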

Thus I would simply caution multi-disciplinary programming teams to distribute the load among themselves, just as the app itself distributes it among servers.  

Java vs C#

As alluded to above, it seems to me that there has been a convergence of Java and C# over the past ten years or so. Again, I realize that at a low level the two runtimes behave quite differently, but from a developer's perspective I think that this difference may often seem relatively transparent.

Hence my take on this is that Microsoft (MS) has striven to emulate the freedom that Java-based, open-source code provides. Of course, the open-source world still offers developers an unparalleled freedom of choice, when it comes to mixing and matching the tools of their development environment. 

Nevertheless, Microsoft's C#-based .Net development environment offers out-of-the-box access to virtually everything that a typical developer needs, which can be a big advantage in terms of “hitting the ground running” for a given project. Interestingly, the same seems to be true of other modern programming languages that are supported by the .Net environment, such as VB, R, S, Python and so on.

The application of AI to mainstream s/w development

That segue leads me neatly to the increasing use of AI (and the related programming languages) in everyday s/w development. It seems reasonable to assume that this phenomenon is occurring largely because we're witnessing a significant rise in the use of Big Data and Analytics for typical Web applications.

This approach makes sense, given that we’re now in a world where our clients are increasingly seeking a competitive edge, such as that which can be obtained through the use of automated trend-analysis. In particular, it’s normal that corporate strategists want to exploit the available customer-based data, in order to determine how to effectively focus their marketing efforts ("like a laser beam") on their targeted audience.

Consumers tend to feel that this may be an invasion of privacy, since it can be somewhat disconcerting to see an obviously targeted ad appear when we’re browsing the Web in a seemingly anonymous mode. Naturally, Web-surfers can take measures to prevent this from happening (e.g. by disabling cookies and so on), but it appears that most people won’t bother.

Conversely, anyone who is selling something generally needs to seek a competitive advantage ... and the use of Analytics seems to be the contemporary ticket for achieving this goal. That’s in part why I’m turning my own attention in that direction, in order to set my sights on the application of AI to everyday s/w development, as described in my earlier blogs.

Personally, I find it amazing that we can use Analytics to analyze Big Data, in order to:
  1. Intuitively describe the situation that's represented by the data, so that we can 
  2. Predict the relevant confidence intervals for likely outcomes and then
  3. Prescribe appropriate strategies, to thereby increase the likelihood of the most desirable results occurring.
From my point of view, all of this is relevant to a plethora of applications, from the optimization of manufacturing processes, to medical diagnoses, to healthcare management, to finance, to sales and marketing.

Hence, while the engineer in me loves the way that we can apply mathematical algorithms to everyday problems, the computer scientist enjoys the cool programs that this entails. Meanwhile, the businessman in me drools over the potential revenue.

So, going forward I'll be providing more info on the evolution of this product via a related blog that can be found at www.SimulationMagic.com (it's currently under construction). Meanwhile I'll continue to work primarily in general s/w development for the Web, since that's still my bread and butter.

Nevertheless, I'm looking forward to gradually becoming more involved in this new adventure, by applying AI to the automated support of decision management and the mitigation of risk.