Hyperlambda, a .Net Core based alternative to Windows Workflow Foundation

At this point, it’s painfully obvious that there’ll never be an official port of Windows Workflow Foundation to .Net Core. For those wondering why, read its name once more. “Windows” is of course the giveaway here, and .Net Core is all about portability. .Net 5.0 is feature complete, and WWF is not on the list. Sure, there exists a community port of the project, but no official Microsoft release of WWF will ever reach .Net Core. Hence, a lot of companies are in need of an alternative as they move to .Net Core for obvious reasons.

If you study WWF and what it actually does, it’s really just a dynamic rule-based engine. Ignoring its visual aspects and its drag-and-drop features, that’s all there is to it. Hence, whatever allows you to dynamically “route” between C# snippets of code, and dynamically orchestrate CLR code, could arguably do its job.

The above description of “dynamically orchestrating CLR code” just so happens to be exactly what Hyperlambda allows you to do. It’s a simple YAML’ish file format, with XPath’ish capabilities, that doesn’t require recompilation or redeployment of your app in order to change its “rules”. This allows you to dynamically edit your “rules”, almost the same way you’d modify your configuration, or records in your database – without having to bring down your app, or recompile anything in your assemblies.

For instance, one use case might be some sort of Asterisk integration towards FreePBX or some other phone server. As phone calls come in, you might want to route them to different internal recipients, depending upon where the call originated. A phone call originating from Australia might be routed to your English-speaking marketing department, while a phone call originating from Germany you’d probably want to route to its German-speaking equivalent.

This ruleset would be a tiny Hyperlambda file, with some 5-10 lines of code, and normally not something you’d want to hardcode into your C# code. By putting this into a Hyperlambda file, like some simple if/else or switch/case statement, you could later easily edit this file if you for some reason decide to add another country to your back office. What would you do if you were opening up offices in China, for instance? Recompile a C# assembly? I don’t think so. Still, this type of logic often tends to become too complex over time for a simple configuration setting. Hence, you often need something that is 100% dynamic, yet Turing complete, allowing you to change it “on the fly”, without having to recompile and redeploy the app. You need a “dynamic scripting language for .Net Core” – which actually perfectly describes Hyperlambda.
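
To illustrate the idea without leaning on actual Hyperlambda syntax, here is a minimal Python sketch of such a dynamic ruleset. The file format, file name, and function names below are made up for illustration – the point is simply that the rules live in a plain file that is re-read at call time, so editing them requires no recompilation or redeployment.

```python
# Hypothetical rule file "rules.txt", one "country=department" per line.
# Because it is re-read on every call, edits take effect immediately,
# without bringing down the app or recompiling anything.

def load_rules(path):
    """Parse lines of the form 'country=department' into a dict."""
    rules = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                country, department = line.split("=", 1)
                rules[country.strip()] = department.strip()
    return rules

def route_call(country, rules, default="English marketing"):
    """Route an incoming call to a department, with a fallback."""
    return rules.get(country, default)
```

Opening an office in China then becomes a one-line edit to the rule file, instead of a recompile and redeploy cycle.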

Advantages of Hyperlambda

Since Hyperlambda is not based upon reflection, it will probably perform orders of magnitude faster than the equivalent Windows Workflow Foundation solution. I have worked with Workflow solutions where a simple HTTP request into our Workflow engine sometimes required 50 seconds simply to change a database record; Hyperlambda would easily cut that time by a factor of 10, possibly more. Jon Skeet in fact measured reflection a couple of years ago, and found it to be 200-400 times slower than typed delegates, and hence compiled lambda expressions, which the underlying Dependency Injection core of Hyperlambda is built upon. In fact, my first implementation of Hyperlambda was built upon reflection, so my own experience echoes Skeet’s findings here. Add a single Dictionary lookup on top of a compiled lambda expression, and you basically have the execution speed of Hyperlambda “keywords”. This should easily be 50-100 times faster than Workflow Foundation – although I can’t (yet) back up my claims with data here.
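
To make the “one dictionary lookup on top of a pre-compiled callable” point concrete, here is a Python analogy – not Hyperlambda’s actual internals, and the slot names are invented – showing how keyword dispatch can avoid reflection entirely:

```python
# Each "keyword" resolves to an already-compiled callable, so executing
# one costs a single hash lookup plus a direct call -- no reflection,
# no type inspection at invocation time. Slot names are illustrative.

slots = {
    "add": lambda args: sum(args),
    "concat": lambda args: "".join(args),
}

def execute(keyword, args):
    # One dictionary lookup, then a direct pre-bound call.
    return slots[keyword](args)
```

A reflection-based implementation would instead inspect types and resolve methods on every invocation, which is where the 200-400x slowdown Skeet measured comes from.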


Hyperlambda is not a programming language you’ll find books for at your local library. In fact, it’s probably the “smallest programming language in existence on the planet today”. This implies I’ll have to do some serious documentation for it, to allow you to even start working with it. However, it’s not “new tech”, since I have been working with different permutations of it more or less since 2013, and have tested it in a whole range of different environments. In addition, it’s arguably also simply a YAML’ish type of file format, combined with an XPath’ish type of expression engine, allowing you to rapidly feel at home if these are technologies you already know. Besides, I have done my best to make its tooling space rich, with a syntax highlighter and autocompleter implementation, based upon JavaScript, in Magic’s frontend. But it would require some initial learning before you’re productive in it. Also, it has no visual “drag and drop” features, like WWF had – which some people would probably count among WWF’s primary features, for the record … 😉

Anyways, all in all, if you’re willing to learn something new, and you need a .Net Core-based rule engine, I would definitely suggest you have a look at Hyperlambda. If you can’t wait until I document the thing, you can probably start out by looking at the unit tests for the “sub-modules” of Magic, and specifically their unit testing suite (298 unit tests++) – which you can find at its main GitHub repository.

Download Magic and Hyperlambda from here if you want to play around with it …

How to separate the Einsteins from the Morte

For weird reasons, this is my most read Quora answer since I started using the site a year ago. Within the answer, the observant reader can easily identify a problem, which is as follows: “How do I separate the Einstein from the Morte?”

First things first: an Einstein is “expensive in maintenance”. By this I don’t necessarily mean his salary, but rather his work ethic. In order to understand why, I must answer the above question first. And the answer is as follows: “Give the developer an existing legacy project, and tell him to add one feature to it.”

The Morte will happily code away, spending a week, or possibly less, to deliver your feature. The Einstein, on the other hand, will spend a month, and as he gives you back your project, it’ll have 10% of its original codebase, it’ll be 10x as fast at runtime, he’ll have eliminated 100+ bugs, and he’ll have your feature done, in addition to having fixed dozens of security holes you didn’t even know you had. As a bonus, the code will be so clean that comments are arguably superfluous.

Hence, if you can get away with crappy code, it’s cheaper to hire the Morte, or some inexperienced overconfident junior developer, believing he can solve every problem tossed at him, because he was able to implement QuickSort in x86 CISC assembly code in College.

If, though, you want to create a product that in the long term requires fewer resources to maintain, that is easily transferred to new developers, with few bugs, executing at the speed of light – you’ll need to hire the Einstein. He might not necessarily demand that much more salary than the Morte, but he’ll refuse to deliver something back to you which he hasn’t (at least) improved 10x. Hence, he becomes more expensive in the short run, but less expensive in the long run. If your idea of software development is to make a few short bucks though, go for the Morte.

It’s impossible to become a brilliant software developer without a touch of code quality OCD

Implementing an aggressive caching strategy with Magic

The more granular your HTTP REST endpoints are, the easier it becomes to implement aggressive caching, which results in fewer HTTP requests, simpler server-side code, less server load, and generally more scalable and responsive web apps. In the video below I illustrate how to make sure your HTTP endpoints take advantage of the “Cache-Control” HTTP header, and thus communicate “max-age” to your frontend, which allows the client to cache the results of your HTTP GET requests for some configurable number of seconds.
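
The server-side mechanics here are tiny. Below is a hedged Python sketch – the function name is mine, not Magic’s – of building the “Cache-Control” header that tells clients and proxies how long a GET result may be reused:

```python
# Build the Cache-Control response header for a cacheable GET endpoint.
# "public" allows shared caches (proxies/CDNs) to store the response;
# "private" restricts caching to the end user's browser.

def cache_headers(max_age_seconds, public=True):
    scope = "public" if public else "private"
    return {"Cache-Control": f"{scope}, max-age={max_age_seconds}"}
```

With this header on a response, a client that repeats the same GET within the max-age window never hits your server at all.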

The architecture of your server-side backend has a lot of consequences. Many developers want to return “rich graph objects” to their clients, and for libraries such as GraphQL, this capability is arguably the main feature. I am here to tell you that even though this can reduce the number of HTTP requests in the short term, it can also make it impossible to cache your HTTP requests, in addition to making the code that runs on your server unnecessarily complex, consuming large amounts of CPU time – resulting in your app becoming less responsive as it acquires more users.

An alternative is to rely upon what the developers behind GraphQL refer to as the “waterfall” approach: retrieving data in a granular fashion, instead of returning “rich” graph objects. This allows you to implement caching on a “per table” basis. For instance, your “users” table is probably a table with frequent inserts and updates, while your “roles” table probably doesn’t see updates or inserts more than once a month, or maybe even *never* after its initial creation.

If you return a graph object of your “users” containing the roles each user belongs to, this results in (at least) 2 SQL statements being evaluated towards your SQL database: one to select the user, and one to select the roles your user(s) belong to. If you return more than one user at a time, this might even result in 20+ SQL statements being evaluated to return a simple list of 10 users with their associated roles. In addition, doing any amount of caching on a query string level becomes literally impossible without risking returning old and invalid data – hence even though you wanted fewer HTTP requests and more scalability, you ended up with more HTTP requests and less scalability.

If you instead retrieve all roles during startup of your application, you can reduce your “users” endpoint to only return data from your users table, and then decorate your users’ roles on the client side – significantly reducing the server load, and allowing you to use a much more aggressive caching strategy. Of course, there are situations where you really, really need to return graph objects – but in my experience, this tends to be overused and abused by inexperienced developers, resulting in slower applications with less scalability, burning an unnecessary amount of CPU time as a consequence.
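
A minimal Python sketch of this “decorate on the client” idea, with hypothetical names of my own: roles are fetched once and cached, the users endpoint returns only role ids, and the association is resolved client-side instead of via a server-side join.

```python
# Populated once at application startup from the rarely-changing
# "roles" endpoint, e.g. {1: "admin", 2: "user"}.
ROLES_CACHE = {}

def decorate_users(users, roles=ROLES_CACHE):
    """Attach role names to user records that only carry role ids."""
    return [
        {**user, "role_names": [roles[rid] for rid in user["role_ids"]]}
        for user in users
    ]
```

The “users” endpoint now touches one table, its responses cache cleanly, and no per-request join is needed on the server.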

It seems so simple: creating an AutoMapper association, a DTO mapping from your Entity Framework type to your View Model, hidden within some Data Repository. However, behind that “simple line of code”, a bunch of potential scalability problems often exist, resulting in your end product becoming less scalable, even though your intention was to make it more scalable …

I’ll probably end up creating some sort of Angular HTTP service interceptor, doing this automagically for you – but at least for now, you can watch the above video to see how easily an extremely aggressive caching strategy can be implemented using Magic. And if you start looking at your database, I’ll be surprised if you don’t conclude that at least some 40-60 percent of your tables change so rarely that an extremely aggressive caching strategy is easily within reach for you – as long as you don’t become tempted to return too complex graph objects from your server.

Premature optimisation is the root of all evil – Donald Knuth, the “father” of programming

And returning rich graph objects, to optimise your data strategy, is almost always premature optimisation …

I will double your productivity as a software developer

The internet is full of false promises. I can’t even log on to LinkedIn without getting hammered with headlines such as “get rich by working from home”, or “make a million dollars in a week”, etc. Obviously it’s difficult to separate the gold from the crap here, and probably 98% of these promises are false – but there is that “one bugger” every now and then who actually is able to keep his promises. Ignoring that guy is probably not wise.

I must confess I am not entirely free from sin in regards to “false promises” myself either. For instance, in my last article, I made an argument that, albeit true in isolation, never would hold up in “the real world”. The argument was that “you can do with $1 with Magic what you need 5 million dollars to do without Magic”.

Of course, if you look at the argument in isolation, it’s solid as rock, and impossible to argue against. However, in the “real world” we need to create a frontend, maybe multiple frontends, for different platforms. Magic of course is “frontend agnostic”, and hence won’t help you much here. So the entire frontend part still remains, even though you can wave your Magic wand and create a backend in 1 second. Another problem Magic doesn’t solve (completely) is integrating with other systems. Even though CRUD is a large part of your problem, it is far from your entire problem.

Though all in all, I feel confident in saying that I’ll make you (at least) twice as productive with Magic as you are without Magic. The reason for this is that in addition to “magically creating your CRUD backend”, Magic also results in a “standard” for your Web APIs. This standard is easily extended upon, allowing you to produce your frontend parts much faster too. If you know the URL for some Web API HTTP REST endpoint, and you know which fields it returns, you can deduce the arguments the backend requires. Needless to say, this allows you to create your frontends much faster than if you had to look up every single API endpoint in its documentation, and create a service layer, a data grid, etc, for every single endpoint in your backend. In fact, creating generalised solutions for your particular frontend needs is ridiculously simple if your backend is Magic. And even for the parts you need to create C# code for, you can still massively benefit from creating an intercepting Hyperlambda layer, to dynamically turn on/off caching, logging, changing authorisation needs, etc – as you need. This makes your change requirements much simpler to implement, compared to having everything in a statically compiled CLR assembly.

In addition, Magic solves a whole range of additional problems, such as securely storing your passwords in your database, authentication, authorisation, etc. Magic is more than “just CRUD” – it is an idea, and the idea is productivity, productivity and productivity. Will I automatically create HTTP service layer code wrapping your endpoints in the future? Yup, probably. Will I create the means to declaratively inject HTTP invocations, to integrate your endpoints with other systems? Yup, probably. However, I want to sell the things I already have, and the things I already have have the potential to make you 2x as productive as you are today. This of course translates into no more overtime. No more never seeing your children because you’re working weekends instead of going to Disneyland with your family. Etc, etc, etc – I am pretty certain you can see the value proposition here if you try …

Would you still have to create code in C#? Yup! I can pretty much guarantee you that! Would you be able to use Magic for every single table in your system? Nope! I can pretty much guarantee you that too – or I could guarantee you that doing so would probably not be wise. So even though Magic is obviously Magical (pun!), it still needs you to wave your wand. Though I feel so confident in it that I will give you the following guarantee.

Unless you become at least twice as productive, I will return you your money, within 90 days of purchasing a license

Did you purchase Magic? Do you feel I couldn’t live up to my promise? Send me an email using the form below, and I’ll give you your money back! And if you still haven’t purchased Magic, you can do so from here.

Magic, 2.5x faster than Python’s Django and 5.5x faster than PHP’s Laravel

I was asked how well Magic scales, and how fast it is compared to other popular solutions out there – and this question intrigued me to such an extent that I had to find the answer myself. Since there is a whole range of existing performance measurements out there comparing .Net Core to PHP’s Laravel and Python’s Django, I could get away with simply comparing a Magic solution to a “pure” C# and .Net Core Web API, and then extrapolate my numbers onto existing graphs. Maybe you think this was “cheating”, but since Magic is all about doing less work and getting more results, I kind of felt it would be in the “Magic spirit” to avoid repeating things you could easily find out through a simple Google search.

My conclusion was that Magic is roughly 33% slower than a pure .Net Core controller endpoint, ignoring the fact that Magic has 10x the number of features of its “pure” .Net Core equivalent. Since a pure .Net Core solution is between 3 and 8 times as fast as its Django and Laravel equivalents, this makes Magic roughly 2.5 times and 5.5 times as fast as its Django and Laravel equivalents respectively. Read the performance article where I got these numbers here. In the video below you can see how I arrived at these numbers, what code I was executing, and how I did the measurements – such that you can reproduce them for yourself, in case you doubt me.

Conclusion – Magic is between 2.5x and 5.5x faster than Django and Laravel

As a final note, I want to emphasise that the “pure” .Net Core solution did not support paging, filtering, rich querying, or any of the added features the Magic solution gives you out of the box. Hence, the comparison isn’t really a fair comparison without mentioning this simple fact. I could of course have pulled in OData, at which point my pure .Net Core solution would also have ended up with query capabilities. I suspect this would have resulted in Magic significantly outperforming the pure .Net Core solution, probably severalfold – but these are my assumptions, and should be taken with a grain of salt until proven correct or incorrect.

As an additional note, I must also say that even though Magic obviously is really, really fast, Magic’s most important feature is not speed of execution – it is speed of development. It took me about 30 minutes to wrap up the code for a really simple .Net controller HTTP GET endpoint. It took me about 1 second to create a much richer and more flexible HTTP GET endpoint in Magic.

Hence, regardless of how you compare Magic to a manual solution where code has to be written, it becomes an unfair comparison – simply because with Magic the whole idea is to completely avoid the creation of code. Something I have illustrated previously in one of my videos, where I wrap a Sugar CRM database with 222 tables, creating 888 HTTP REST endpoints, by simply clicking a button.

Let me put this into perspective. It took me 30 minutes to wrap up a simple HTTP GET endpoint in C#. If I were to add filtering, paging, and query capabilities, it would probably require (at least) 3x the amount of time. Extrapolating 1.5 hours of development onto 888 HTTP endpoints becomes 1,332 hours of software development. 1,332 hours divided by 8 becomes 166.5 days of actual development. 166.5 divided by 5 (working days per week) becomes 33.3 weeks of development. This translates into 7.9 months of development – ignoring vacations and such. Hence, one man would have to work for roughly 8 months to produce what I did in one second by clicking a button, with my computer spending 40 seconds delivering – assuming we can extrapolate 1.5 hours onto 888 HTTP REST endpoints. If we were to take this amount of time literally, and translate it into costs, creating the code yourself becomes, for this particular use case, 4,795,200 times as expensive – simply because 1,332 hours becomes 4,795,200 seconds, and it took me “1 second of man hours” to create Magic’s 888 HTTP REST endpoints.
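
The back-of-the-envelope arithmetic above can be checked mechanically. A short Python sketch, using the same assumptions as the paragraph (1.5 hours per endpoint, 888 endpoints, 8-hour days, 5-day weeks):

```python
# Verify the extrapolation from 1.5 hours per endpoint to 888 endpoints.

hours = 1.5 * 888          # 1332 hours of manual development
days = hours / 8           # 166.5 eight-hour working days
weeks = days / 5           # 33.3 five-day working weeks
seconds = hours * 3600     # 4,795,200 seconds, vs. ~1 second with Magic

assert hours == 1332
assert days == 166.5
assert round(weeks, 1) == 33.3
assert seconds == 4_795_200
```

The hours-to-months conversion depends on how many weeks you count per month, which is why the figure is only “roughly 8 months”.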

Of course, the above is arguably “China math”, and there are many additional things to consider in a real solution, skewing the numbers in one direction or the other. For instance, what about maintenance? But if we are to take the numbers literally, you will need roughly 5 million dollars to achieve through manual coding the same thing you can achieve with $1 and Magic.

1 dollar with Magic brings you the same as 5 MILLION dollars without Magic

Yet again, take the above numbers with a grain of salt, since there are a lot of other factors you need to consider when choosing how to implement your solution. But the above are interesting numbers, and arguably impossible to prove “wrong” – although yet again, I want to emphasise that they are “China math”.

But that Magic saves you costs, resources, and therefore money is beyond any doubt for those with eyes to see for themselves. Now we also know that Magic results in faster end products, at least compared to everything that can compare itself to Magic.


The power of ZERO

I am not sure if this is a true story, but it’s really, really good, so I’ll just assume it’s the truth and convey it anyway – because it contains lessons I think are important.

This happened after World War II, sometime in the 1950s or thereabouts. An American car manufacturer wanted to offshore the creation of some of their components. They had done the math on it, and found that they could save a lot of money by having some Japanese subcontractor create a specific part for their cars. Naturally sceptical, since this was the first time something like this was attempted, they wrote an extremely detailed specification for their Japanese subcontractor, to make sure everything was taken care of. At the bottom, the specification stated a fault tolerance of 2%, implying a margin for error of at most 2%.

6 months later, the Japanese subcontractor was finished with the delivery, and it arrived by boat at some port in America. The American company was eager to see the results, so they sent a representative down to the harbour to inspect them. In the shipment, there was one huge package, and a tiny package next to it. The guy who was sent to collect the parts scratched his head, and didn’t understand why there was a smaller package next to the big one, until he read the letter the Japanese subcontractor had sent together with the delivery. It read …

“We have no idea why you want to have 2% errors, but you can find your 2% errors in the smaller package”

For the Japanese subcontractor, simply the idea of delivering something that wasn’t 100% perfect was so incomprehensible that they didn’t even have a vocabulary or mindset that allowed for fault tolerance. Their fault tolerance was always ZERO, period!

We need more people to think like this, especially in the Software Industry …

Simple is always better

This is your software. If you click the button, it solves your problems

I once read a story. It was a story about a company that was hired by Apple to create Apple’s CD burner software. I am not sure if it’s true or not, but I like it, so I will convey it to you anyway.

A small company in Silicon Valley had been given the task of creating Apple’s CD burner software. They were going to meet Steve Jobs, and they were so enthused about this project they were about to burst with enthusiasm. They prepared like you wouldn’t believe. 3 months of writing specifications, use cases, and thinking about every possible little thing they could imagine the software would need. The software had all the buttons you could imagine, for every possible task in this world, and they had hundreds of pages of documentation explaining their rationale.

When the day for their meeting with Steve came, they headed up to Apple’s HQ, and brought all their documents and presentations with them. Steve walked into the room and said “Hi, I’m Steve”. Then he proceeded to the whiteboard and drew a big square on the board. He said “this is the software”. Then he drew a smaller square in the middle of the software as he said “This is its only button”. It said “Start” or something, I think. He finished off by saying “Good luck”, before he left the room. After a week of being angry at being largely ignored by their hero, these software developers realised Steve was right. No need to go “nuclear” on features and buttons. The software didn’t need more than one button, and everything beyond that single button was arguably “bloat”. And their conclusion was as follows.

There are no other sane ways of creating software

There is no shame in asking for help

I don’t cut my own hair. Neither do I build my own cars. Instead I pay for these services with money I earn doing my job, which is software architecture and development. This creates a symbiotic relationship, where I can do what I am best at, and everybody else can do what they are best at – win-win!

Unless your core value as a company is to create software, creating your own software is like having your hairdresser build his own car. Everybody can see the madness in this analogy (puuh!), but few can see the madness in having some Acme, Inc. financial or medical company building their own software for some weird reason. If your core business model is insurance, chances are you’re probably not going to be able to take software development seriously – and you shouldn’t, in fact. You should hire somebody else to take it seriously on your behalf. Somebody who has already taken it seriously for decades. Doing things with your “left hand” produces “left hand results”. For a company where software development is a secondary function, software will always be created with “the left hand”. Since your business depends upon software, the same way you depend upon your car, this results in weird situations, where you are no longer able to sustain your company’s operations, because you have built your company on top of “left hand products”.

There is no shame in asking for help

Identifying your Pearls

What is the real value of your company?

If your software is legacy garbage, what is your company’s true value then? I don’t really have to say this out loud to a seasoned manager, but it’s the relationships you have with your customers, combined with your database. The experiences your clients and customers have with you, combined with the information you have about these clients, is your company’s real value. Your database is the reason why your key account manager can call up “John Doe” and ask him how his BBQ last Saturday was, and whether he’s interested in purchasing your latest product, which outperforms the previous version by 1.8 times on all metrics – closing the sale, due to his existing relationship with Mr. Doe, becomes almost the most natural thing in the world. Any person with any kind of experience in sales can easily agree with this.

However, if your existing software system needs 30 minutes to find Mr. Doe’s last activities, and the last phone conversation your key account manager had with him, then your software system becomes an anchor that drags you down, instead of lifting you up. This implies that if you are to completely change your existing software, you must change it in such a way that you can still leverage your existing asset: your database. Hence, your next generation of software must be able to bring the lessons from the previous generation with it, in order to provide value to your company, while still being fresh, modern, and blisteringly fast – following all the modern best practices in regards to UX, security, and scalability.

Luckily, your database contains what we software developers refer to as “meta information”. This information allows us software developers to gain knowledge about the structure of your data. This structure can then be used to automatically recreate your software, and upgrade it according to modern standards, getting rid of all the legacy garbage you’ve been dragging around for a decade or more. Basically, this meta information allows us to recreate your backend software system, literally in seconds. Watch the following video to understand how, where I take an existing CRM system that has been maintained for more than a decade, and arguably port it to a modern platform, getting rid of all legacy garbage in the process – and I am able to do it in 40 seconds!
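
To make “using meta information” concrete, here is a hedged Python sketch of the idea – not Magic’s actual implementation, and the verb list and URL scheme are invented for illustration: read the table names from a database’s schema metadata, then generate one endpoint per CRUD verb per table.

```python
# Given table names read from a database's schema ("meta information"),
# generate one endpoint per CRUD verb per table. With 222 tables and
# 4 verbs, this mechanically yields 888 endpoints.

CRUD_VERBS = ["create", "read", "update", "delete"]

def generate_endpoints(table_names):
    """Produce one hypothetical endpoint path per verb per table."""
    return [f"/{table}/{verb}" for table in table_names for verb in CRUD_VERBS]
```

This is why the generation step scales with the size of the schema rather than with developer effort: the structure is already in the database, waiting to be read.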

I want to emphasise that the above video demonstrates doing literally some 40-60 percent of the job of recreating your entire software system, and doing that job in 40 seconds. This allows us to create an entirely new software system based upon your existing data and its structure, and simply “apply” an entirely new software backend to it. New software that is highly optimized, extremely scalable, and super secure. Software that is created for your future, and not your past.

When asked how to build a house, others will start building. I will spend 10 years thinking about how I can build a house in 10 seconds. Then when I have succeeded, I will build thousands of houses in one hour.

The above process is unique to something I refer to as “Magic”, a proprietary tool I have spent more than a decade researching and building. Paradoxically, as I created it, I had to throw away 3 “legacy versions” of it myself, which weren’t good enough for different reasons. Hence, I do as I preach – few can object to that simple fact. Now it’s your turn to get rid of your old garbage, upgrade your software infrastructure, and ready yourself for your future – getting rid of the “ghosts from previous times” in the process. Contact me below if you’d like to hear more about this. Or check out Magic for yourself, if you’re curious and technically savvy enough to understand the process.

The “precious” legacy codebase

You don’t have to be destroyed as your codebase nears the end of its life

Most managers have an unrealistic idea of the value of their own codebase. This is a natural consequence of having maintained the code, possibly for as long as a decade. “If 10 people spent 10 years creating the stuff, obviously it must be worth a lot, right?” I am here to tell you that such conclusions are fundamentally wrong. In fact, the more developers have maintained your codebase, and the longer you have maintained it, the less your codebase is worth! The reasons are obvious to anyone with experience from software development spanning more than a handful of projects: spaghetti!

The more people touching your codebase, the less consistent it becomes, and the more difficult it becomes to maintain. And the more difficult it becomes to maintain, the more bloat it accumulates. The more bloat your code gets, the more difficult it becomes to maintain, and so on. This becomes a vicious cycle, where the codebase quality spirals downwards, to the point where its value becomes net negative.

If you don’t believe me, look at your own codebase. My guess is it contains dozens of libraries that are no longer actively maintained, or that used to be “best practices” a decade ago. jQuery, Durandal, or .Net Framework for that matter? If your codebase contains any of these elements, it’s destined for the scrapyard. Sorry for giving you a painful message here, but it’s the sad truth. Code doesn’t have value – in fact, it has no value at all. Code has never had value, and it will never get value either. What has value is your software suite’s ability to solve problems. As time passes, your codebase’s ability to solve problems becomes less and less, due to legacy code, bloated repositories, and obsolete libraries, to the point where it can no longer sustain its life. As it does, it is crucial that your codebase “spins off a child” (a new codebase), to continue your ability to keep solving the problems your original code was architected to solve in the first place.

The fact is, code has a lifespan, just like animals and humans do. And when your codebase reaches the end of its lifespan, you can either accept it, throw it away, and do the big rewrite – or you can choose to sink to the bottom with it, like the captain of the Titanic. Everyone in the software industry with more than a decade of experience creating software can easily echo my experiences here. Whether they have the courage to actually tell you, though, is a different story. Don’t hire “yes people”; hire people who will guide you to the truth – even when the truth is painful. This was the recipe followed by literally all great entrepreneurs, ranging from Bill Gates to Henry Ford. If you’re a manager and your company is maintaining a mountain of garbage, do me a favour. Go to the bathroom and tell yourself the following.

My code is garbage. I know this, and I must do something about it.

Accepting the truth is the first step to recovery!