Hyperlambda, a .Net Core based alternative to Windows Workflow Foundation

At this point in time, it's painfully obvious that there'll never be an official port of Windows Workflow Foundation to .Net Core. For those wondering why, read its name once more. "Windows" of course is the giveaway here, and .Net Core is all about portability. .Net 5.0's feature set is complete, and WWF is not on the list. Sure, there exists a community port of the project, but no official Microsoft release of WWF will ever reach .Net Core. Hence, a lot of companies are in need of an alternative as they move to .Net Core for obvious reasons.

If you study WWF and what it actually does, it's really just a dynamic, rule-based engine. Ignoring its visual aspects and its drag and drop features, that's all there is to it. Hence, whatever allows you to dynamically "route" between C# snippets of code, and dynamically orchestrate CLR code, could arguably do its job.

The above description of "dynamically orchestrating CLR code" just so happens to be exactly what Hyperlambda allows you to do. It's a simple YAML'ish file format, with XPath-like expression capabilities, that doesn't require recompilation or redeployment of your app in order to change its "rules". This allows you to dynamically edit your "rules", almost the same way you'd modify your configuration, or records in your database – Without even having to bring down your app, let alone recompile anything in your assemblies.

For instance, one use case might be that you have some sort of Asterisk integration towards FreePBX or some other phone server. As calls come in, you might want to route them to different internal recipients, depending upon where each call originated. A phone call originating from Australia might be routed to your English speaking marketing department, while a phone call originating from Germany would probably be routed to its German speaking equivalent.

This ruleset would be a tiny Hyperlambda file, with some 5-10 lines of code, and normally not something you'd want to hardcode into your C# code. By putting this into a Hyperlambda file, like some simple if/else or switch/case statement, you could later easily edit this file if you for some reason decide to add another country to your back office. What would you do if you were opening up offices in China for instance? Recompile a C# assembly? I don't think so. Still, this type of logic tends over time to become too complex for a simple configuration setting. Hence you often need something that is 100% dynamic, yet Turing complete, allowing you to change it "on the fly", without having to recompile and redeploy the app. You need a "dynamic scripting language for .Net Core" – Which actually perfectly describes Hyperlambda.
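To make the example concrete, below is a minimal sketch of what that routing rule would look like if you did hardcode it in C#. The class name and department names are purely hypothetical; the whole argument above is that exactly this kind of mapping belongs in a small, editable Hyperlambda file rather than in a compiled assembly.

```csharp
// Hypothetical hardcoded version of the routing rule described above.
// The point is that this mapping belongs in a dynamic Hyperlambda file
// instead, so adding a new country never requires recompiling an assembly.
public static class CallRouter
{
    public static string Route(string originCountry) =>
        originCountry switch
        {
            "Australia" => "english-marketing",
            "Germany"   => "german-marketing",
            _           => "english-marketing" // fallback department
        };
}
```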

Advantages of Hyperlambda

Since Hyperlambda is not based upon reflection, it will probably perform orders of magnitude faster than the equivalent Windows Workflow Foundation solution. I have worked with Workflow solutions where a simple HTTP request into our Workflow engine sometimes required 50 seconds to simply change a database record. Hyperlambda would easily cut that time to a tenth, possibly less. Jon Skeet in fact measured reflection a couple of years ago, and found it to be 200-400 times slower than typed delegates, and hence compiled lambda expressions, which the underlying Dependency Injection core of Hyperlambda is built upon. And in fact, my first implementation of Hyperlambda was built upon reflection, so my own experiences echo Skeet's findings here. Add a single Dictionary lookup on top of a compiled lambda expression, and you basically have the execution speed of Hyperlambda "keywords". This should easily be 50-100 times faster than Workflow Foundation – Although I can't (yet) back up my claims with data here.
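To illustrate the cost model, here is a simplified C# sketch – not Hyperlambda's actual internals, just the pattern described above – contrasting a reflection-based invocation with a compiled lambda expression cached behind a Dictionary lookup.

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;
using System.Reflection;

class SlotSketch
{
    // A "slot" we want to invoke dynamically, by name.
    public static int Add(int x, int y) => x + y;

    static void Main()
    {
        MethodInfo method = typeof(SlotSketch).GetMethod(nameof(Add));

        // 1. Reflection: late bound, boxes its arguments, and is the slow path
        //    Skeet measured at roughly 200-400 times slower than typed delegates.
        object viaReflection = method.Invoke(null, new object[] { 2, 3 });

        // 2. Compiled lambda expression: built once, then invoked as a typed delegate.
        ParameterExpression x = Expression.Parameter(typeof(int), "x");
        ParameterExpression y = Expression.Parameter(typeof(int), "y");
        Func<int, int, int> compiled = Expression.Lambda<Func<int, int, int>>(
            Expression.Call(method, x, y), x, y).Compile();

        // 3. Cache the delegate behind a name: executing a "keyword" then costs
        //    one Dictionary lookup plus one delegate invocation.
        var slots = new Dictionary<string, Func<int, int, int>> { ["add"] = compiled };
        int viaDelegate = slots["add"](2, 3);

        Console.WriteLine($"{viaReflection} == {viaDelegate}");
    }
}
```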

Disadvantages

Hyperlambda is not a programming language you'll find books for at your local library. In fact, it's probably the "smallest programming language in existence on the planet today". This implies I'll have to do some serious documentation for it, to allow you to even start working with it. However, it's not "new tech", since I have been working with different permutations of it more or less since 2013 myself, and have tested it in a whole range of different environments. In addition, it's arguably also simply a YAML'ish type of file format, combined with an XPath'ish type of expression engine, allowing you to rapidly feel at home if these technologies are something you know from before. Besides, I have done my best to make its tooling rich, with a syntax highlighter and autocompleter implementation, based upon JavaScript in Magic's frontend. But it would require some initial learning before you're productive in it. Also, it has no visual "drag and drop" features, like WWF had – Which probably some people would count among its primary features, for the record … 😉

Anyways, all in all, if you’re willing to learn something new, and you need a .Net Core based rule engine, I would definitely suggest you have a look at Hyperlambda. If you can’t wait till I document the thing, you can probably start out by looking at the unit tests for the “sub-modules” of Magic, and specifically their Unit Testing suite (298 unit tests++) – Which you can find at its main GitHub repository.

Download Magic and Hyperlambda from here if you want to play around with it …

Implementing an aggressive caching strategy with Magic

The more granular your HTTP REST endpoints are, the easier it becomes to implement aggressive caching, which results in fewer HTTP requests, simpler server-side code, less server load, and generally more scalable and responsive web apps. In the video below I illustrate how to make sure your HTTP endpoints take advantage of the "Cache-Control" HTTP header, and thus communicate "max-age" to your frontend, which allows the client to cache the results of your HTTP GET requests for some configurable number of seconds.
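For reference, this is roughly how the same idea looks in a hand-written ASP.NET Core controller – not how Magic does it internally, just a plain sketch of the mechanics. The endpoint, the returned data, and the max-age value are placeholders.

```csharp
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class RolesController : ControllerBase
{
    // Roles rarely change, so tell the client it may cache the result for 5 minutes.
    [HttpGet]
    [ResponseCache(Duration = 300, Location = ResponseCacheLocation.Any)]
    public ActionResult<string[]> Get()
    {
        // The attribute above emits "Cache-Control: public, max-age=300".
        // You could also set the header manually:
        // Response.Headers["Cache-Control"] = "public, max-age=300";
        return new[] { "admin", "moderator", "guest" };
    }
}
```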

The architecture of your server-side backend has a lot of consequences. Often developers want to return "rich graph objects" to their clients, and for libraries such as GraphQL, this capability arguably becomes the main feature. I am here to tell you that even though this can reduce the number of HTTP requests in the short term, it can also make it impossible to cache your HTTP requests, in addition to making the code that runs on your server unnecessarily complex and CPU hungry – Resulting in an app that becomes less responsive as it acquires more users.

An alternative is to rely upon what the developers behind GraphQL refer to as the "waterfall" approach, retrieving data in a granular fashion instead of returning "rich" graph objects. This allows you to implement caching on a "per table" basis. For instance, your "users" table is probably a table with frequent inserts and updates, while your "roles" table probably doesn't see updates or inserts more than once a month, or maybe even never after its initial creation.

If you return a graph object of your "users" containing the roles each user belongs to, this results in (at least) 2 SQL statements being evaluated against your database: one to select the user, and one to select the roles your user(s) belong to. If you return more than one user at a time, this might even result in 20+ SQL statements being evaluated, just to return a simple list of 10 users with their associated roles. In addition, doing any amount of caching on the query string level becomes literally impossible without risking returning old and invalid data – Hence the result is that even though you wanted fewer HTTP requests and more scalability, you ended up with more HTTP requests and less scalability.

If you instead retrieve all roles during startup of your application, you can reduce your "users" endpoint to only return data from your users table, and then decorate your users with their roles on the client side – Significantly reducing the server load, and allowing you to use a much more aggressive caching strategy. Of course, there are situations where you really, really need to return graph objects – But in my experience, this tends to be overused and abused by inexperienced developers, resulting in slower applications with less scalability, burning an unnecessary amount of CPU time as a consequence.
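A minimal sketch of the per-table approach follows. The Role, User and IUserRepository types are hypothetical; the point is simply that the slowly changing roles table gets a long client-side cache, the users endpoint returns flat records with a short one, and the client joins the two itself.

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

// Role, User and IUserRepository are hypothetical types, used only to show the shape.
[ApiController]
public class CatalogController : ControllerBase
{
    private readonly IUserRepository _repository;
    public CatalogController(IUserRepository repository) => _repository = repository;

    // Roles almost never change, so clients may cache them aggressively (one hour).
    [HttpGet("api/roles")]
    [ResponseCache(Duration = 3600, Location = ResponseCacheLocation.Any)]
    public ActionResult<IEnumerable<Role>> Roles() => Ok(_repository.GetRoles());

    // Users change frequently, so return flat records with a short cache (30 seconds),
    // and let the client join each user to the roles list it already has cached.
    [HttpGet("api/users")]
    [ResponseCache(Duration = 30, Location = ResponseCacheLocation.Any)]
    public ActionResult<IEnumerable<User>> Users() => Ok(_repository.GetUsers());
}
```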

It seems so simple, creating an AutoMapper association, a DTO mapping from your Entity Framework type to your View Model, hidden within some Data Repository. However, behind that "simple line of code" a bunch of potential scalability problems often exist, with the result that your end product becomes less scalable, even though your intention was to make it more scalable …

I'll probably end up creating some sort of Angular HTTP service interceptor doing this automagically for you – But at least for now, you can watch the above video to see how easily an extremely aggressive caching strategy can be implemented using Magic. And if you start looking at your database, I'll be surprised if you don't conclude that at least some 40-60 percent of your tables change so rarely that an extremely aggressive caching strategy is easily within reach for you – As long as you don't become tempted to return overly complex graph objects from your server.

Premature optimisation is the root of all evil – Donald Knuth, the “father” of programming

And returning rich graph objects, to optimise your data strategy, is almost always premature optimisation …

I will double your productivity as a software developer

The internet is full of false promises. I can't even log on to LinkedIn without getting hammered with headlines such as "get rich by working from home", or "make a million dollars in a week", etc. Obviously it's difficult to separate the gold from the crap here, and probably 98% of these promises are false – But there is that "one bugger" every now and then who actually is able to keep his promises. Ignoring that guy is probably not wise.

I am not entirely free from sin in regards to "false promises" myself either, I must confess. For instance, in my last article, I made an argument that, albeit true in isolation, would never hold up in "the real world". The argument was that "you can do with $1 with Magic, what you need 5 million dollars to do without Magic".

Of course, if you look at the argument in isolation, it's solid as a rock, and impossible to argue against. However, in the "real world" we need to create a frontend, maybe multiple frontends, for different platforms. Magic of course is "frontend agnostic", and hence won't help you much here. So all of the frontend work still remains, even though you can wave your Magic wand and create a backend in 1 second. Another problem Magic doesn't (completely) solve is integrating with other systems. Even though CRUD is a large part of your problem, it is far from your entire problem.

Though all in all, I feel confident in saying that I'll make you (at least) twice as productive with Magic as you are without Magic. The reason for this is that in addition to "magically creating your CRUD backend", Magic also results in a "standard" for your Web APIs. This standard is easily extended upon, allowing you to produce your frontend parts much faster too. If you know the URL for some Web API HTTP REST endpoint, and you know which fields it returns, you can deduce the arguments the backend requires. Needless to say, this allows you to create your frontends much faster than if you had to look up every single API endpoint in its documentation, and create a service layer, a data grid, etc, for every single endpoint in your backend. In fact, creating generalised solutions for your particular frontend needs is ridiculously simple if your backend is Magic. And even for the parts you need to create C# code for, you can still massively benefit from creating an intercepting Hyperlambda layer, to dynamically turn on/off caching, logging, changing authorisation requirements, etc – As you need. This makes your change requirements much simpler to implement, compared to having everything in a statically compiled CLR assembly.

In addition, Magic solves a whole range of other problems, such as securely storing your passwords in your database, authentication, authorisation, etc. Magic is more than "just CRUD" – It is an idea, and the idea is productivity, productivity and productivity. Will I automatically create HTTP service layer code wrapping your endpoints in the future? Yup, probably. Will I create the means to declaratively inject HTTP invocations, to integrate your endpoints with other systems? Yup, probably. However, I want to sell the things I have already, and the things I have already have the potential to make you 2x as productive as you are today. This of course translates into no more overtime. No more never seeing your children because you have to work weekends instead of going to Disneyland with your family. Etc, etc, etc – I am pretty certain you can see the value proposition here if you try …

Would you still have to create code in C#? Yup! I can pretty much guarantee you that! Would you be able to use Magic for every single table in your system? Nope! I can pretty much guarantee you that too – Or at least I can guarantee you that doing so would probably not be wise. So even though Magic is obviously Magical (pun!), it still needs you to wave your wand. Though I feel so confident in it that I will give you the following guarantee.

Unless you become at least twice as productive, I will return you your money, within 90 days of purchasing a license

Did you purchase Magic? Do you feel I couldn’t live up to my promise? Send me an email using the form below, and I’ll give you your money back! And if you still haven’t purchased Magic, you can do so from here.

Magic, 2.5x faster than Python’s Django and 5.5x faster than PHP’s Laravel

I was asked how well Magic scales, and how fast it is compared to other popular solutions out there – And this question intrigued me to such an extent that I had to find the answer for it myself. Since there is a whole range of existing performance measurements out there comparing .Net Core to PHP's Laravel and Python's Django, I could get away with simply comparing a Magic solution to a "pure" C# and .Net Core Web API, and then extrapolate my numbers onto existing graphs. Maybe you think this was "cheating", but since Magic is all about doing less work and getting more results – I kind of felt it would be in the "Magic spirit" to avoid repeating things you could easily find out through a simple Google search.

My conclusion was that Magic is roughly 33% slower than a pure .Net Core controller endpoint, ignoring the fact that Magic has 10x the number of features of its "pure" .Net Core equivalent. Since a pure .Net Core solution is between 3 and 8 times as fast as its Django and Laravel equivalents, this puts Magic at roughly 2.5 times and 5.5 times as fast as its Django and Laravel equivalents. Read the performance article where I got these numbers here. In the video below you can see how I arrived at these numbers, what code I was executing, and how I did the measurements – Such that you can reproduce it for yourself, in case you doubt me.

Conclusion – Magic is between 2.5x and 5.5x faster than Django and Laravel

As a final note, I want to emphasise that the "pure" .Net Core solution did not support paging, filtering, rich querying, or any of the added features the Magic solution gives you out of the box. Hence, the comparison isn't really a fair comparison without mentioning this simple fact. I could of course have pulled in OData, at which point my pure .Net Core solution would also have ended up with query capabilities. I suspect Magic would then have significantly outperformed the pure .Net Core solution, probably severalfold – But these are my assumptions, and should be taken with a grain of salt until proven correct or incorrect.

As an additional note, I must also say that even though Magic obviously is really, really fast – Magic's most important feature is not speed of execution – It is speed of development. It took me about 30 minutes to wrap up the code for a really simple .Net Controller HTTP GET endpoint. It took me about 1 second to create a much richer and more flexible HTTP GET endpoint in Magic.

Hence, regardless of how you compare Magic to a manual solution where code has to be written, it becomes an unfair comparison – Simply because with Magic the whole idea is to completely avoid the creation of code. Something I have illustrated previously in one of my videos, where I wrap a Sugar CRM database with 222 tables, creating 888 HTTP REST endpoints, by simply clicking a button.

Let me put this into perspective. It took me 30 minutes to wrap up a simple HTTP GET endpoint in C#. If I was to add filtering, paging, and query capabilities to it, it would probably require me (at least) 3x the amount of time. Extrapolating 1.5 hours of development per endpoint across 888 HTTP endpoints becomes 1,332 hours of software development. 1,332 hours divided by 8 becomes 166.5 days of actual development. 166.5 divided by 5 days (working days per week) becomes 33.3 weeks of development. This translates into roughly 7.9 months of development – Ignoring vacations and such. Hence, one man would have to work for roughly 8 months to produce what I did in one second by clicking a button, after which my computer spent 40 seconds delivering – Assuming we can extrapolate 1.5 hours per endpoint across 888 HTTP REST endpoints. If we were to take this amount of time literally, and translate it into costs, manually creating the code becomes, for this particular use case, 4,795,200 times as expensive – Simply because 1,332 hours becomes 4,795,200 seconds, and it took me "1 second of man hours" to create Magic's 888 HTTP REST endpoints.
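For those who want to double check the arithmetic, the entire calculation is nothing more than the following tiny sketch, re-running the numbers from the paragraph above.

```csharp
using System;

class BackOfTheEnvelope
{
    static void Main()
    {
        // Re-running the numbers from the text above.
        double hoursPerEndpoint = 1.5;               // 30 minutes, times ~3 for paging/filtering/querying
        int endpoints = 888;

        double hours = hoursPerEndpoint * endpoints; // 1332 hours
        double days = hours / 8;                     // 166.5 working days
        double weeks = days / 5;                     // 33.3 working weeks
        double seconds = hours * 3600;               // 4,795,200 seconds

        Console.WriteLine($"{hours} hours = {days} days = {weeks:0.0} weeks = {seconds:N0} seconds");
        // Magic needed roughly 1 second of "man hours", hence the ~4.8 million to 1 ratio.
    }
}
```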

Of course the above is arguably "China math", and there are many additional things to consider in a real solution, skewing the numbers in one direction or the other. For instance, what about maintenance? But if we are to take the numbers literally, you will need roughly 5 million dollars to achieve by manually coding what you can achieve with $1 and Magic.

1 dollar with Magic brings you the same as 5 MILLION dollars without Magic

Yet again, take the above numbers with a grain of salt, since there are a lot of other factors you need to consider when choosing how to implement your solution. But the above are interesting numbers, and arguably impossible to prove "wrong" – Although yet again I want to emphasise that they are "China math".

But that Magic saves you costs, resources, and therefore money – Is beyond any doubt for those with eyes to see for themselves. Now we also know that Magic results in faster end products, at least compared to everything it can reasonably be compared to.

Identifying your Pearls

What is the real value of your company?

If your software is legacy garbage, what is your company's true value then? I don't really have to say this out loud to a seasoned manager, but it's the relationships you have with your customers, combined with your database. The experiences your clients and customers have with you, combined with the information you have about these clients, is your company's real value. Your database is the reason why your key account manager can call up "John Doe" and ask him how his BBQ last Saturday was, and whether he's interested in purchasing your latest product, which outperforms the previous version by 1.8 times on all metrics – Closing the sale, due to his existing relationship with Mr. Doe, becomes almost the most natural thing in the world. Anyone with any kind of sales experience can easily agree with this.

However, if your existing software system needs 30 minutes to find Mr. Doe's last activities, and the last phone conversation your key account manager had with him – Then your software system becomes an anchor that drags you down, instead of lifting you up. This implies that if you are to completely change your existing software, you must change it in such a way that you can still leverage your existing asset: your database. Hence your next generation of software must be able to bring the lessons from the previous generation of software with it, in order to provide value to your company, while still being fresh, modern, and blisteringly fast – Following all the modern best practices in regards to UX, security, and scalability.

Luckily, your database contains what we software developers refer to as "meta information". This information allows us to gain knowledge about the structure of your data. This structure can then be used to automatically recreate your software, and upgrade it according to modern standards, getting rid of all the legacy garbage you've been dragging around for a decade or more. Basically, this meta information allows us to recreate your backend software system, literally in seconds. Watch the following video to understand how: I take an existing CRM system that has been maintained for more than a decade, and arguably port it to a modern platform, getting rid of all the legacy garbage in the process – And I am able to do it in 40 seconds!
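To make "meta information" concrete, here is a minimal C# sketch – not Magic's actual scaffolder – that reads table and column names from a SQL Server database through the standard INFORMATION_SCHEMA views. This is exactly the kind of structure a generator needs in order to emit CRUD endpoints; the connection string is a placeholder.

```csharp
using System;
using Microsoft.Data.SqlClient; // or System.Data.SqlClient on older stacks

class SchemaReaderSketch
{
    static void Main()
    {
        // Placeholder connection string; point it at your own database.
        using var connection = new SqlConnection(
            "Server=localhost;Database=my-crm;Trusted_Connection=True;");
        connection.Open();

        // The "meta information" referred to above: every table and column,
        // with its type, straight out of the ANSI INFORMATION_SCHEMA views.
        using var command = new SqlCommand(
            @"SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
              FROM INFORMATION_SCHEMA.COLUMNS
              ORDER BY TABLE_NAME, ORDINAL_POSITION", connection);

        using var reader = command.ExecuteReader();
        while (reader.Read())
            Console.WriteLine($"{reader["TABLE_NAME"]}.{reader["COLUMN_NAME"]} ({reader["DATA_TYPE"]})");

        // A scaffolder would feed this structure into code generation,
        // emitting one set of CRUD endpoints per table.
    }
}
```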

I want to emphasise that the above video demonstrates doing literally some 40-60 percent of the job of recreating your entire software system, and it is doing that job in 40 seconds. This allows us to create an entirely new software system based upon your existing data and its structure, and simply "apply" an entirely new software backend to it. New software that is highly optimised, extremely scalable, and super secure. Software that is created for the future, and not your past.

When asked how to build a house, others will start building. I will spend 10 years thinking about how I can build a house in 10 seconds. Then when I have succeeded, I will build thousands of houses in one hour.

The above process is unique to something I refer to as "Magic", which is a proprietary tool I have spent more than a decade researching and building. Paradoxically, as I created it, I had to throw away 3 "legacy versions" of it myself, which weren't good enough for different reasons. Hence, I do as I preach – Few can object to that simple fact. Now it's your turn to get rid of your old garbage, upgrade your software infrastructure, and ready yourself for your future – Getting rid of the "ghosts from previous times" in the process. Contact me below if you'd like to hear more about this. Or check out Magic for yourself if you're curious, and technically savvy enough to understand the process.

The advantages of doing the “big rewrite”

Purely statistically, this is your current codebase

The big rewrite scares the living crap out of many software developers, and especially managers. The arguments against it are basically permutations of “We have spent a decade maintaining this codebase, and we have had dozens of employees working on it,  and you want to start all over again?” Of course, to an experienced software developer, the above argument is exactly why the code should be rewritten. If you don’t understand why, read the argument again, this time thinking about what it’s actually stating.

However, most managers believe that just because they spent 100+ man-hours creating the code in the first place, it will also require 100+ man-hours re-creating the code. This is a fundamentally flawed argument, and has no hold in reality whatsoever. In fact, I'd argue that every time you recreate the same piece of software, you can do it 5 times as fast. Hence, if you have created a system 3 times, the 4th time you create it, you can create it 125 times as fast as you did the first time. Simply because at this point you know everything that's needed to wire the system together, you are able to produce smaller and tighter code, and paradoxically the result also becomes better as a consequence of rewriting the system. I should know, having done this dozens of times myself. In fact, this is almost a "natural law of system development", almost like Moore's law. It's difficult to believe in though, so let me illustrate my point by showing you how I recreated a CRM system, wrapping 222 database tables, arguably replacing (at least) 50% of an existing legacy system that had been maintained by dozens of software developers for more than a decade – And I did it in 40 seconds!

Don’t fear the big rewrite, fear the fear of the big rewrite

Fact is, there is a 95% statistical probability that the code you currently love, in the codebase you have maintained for more than a decade with dozens of software developers, is PURE GARBAGE! I should know, having worked on dozens of legacy systems over more than 37 years of software development.

Download Magic here if you’re not afraid of the big rewrite

Creating a Web API backend with ZERO lines of code, and invoking it with ONE line of code

About a year ago I told an acquaintance of mine on Twitter that I was working on a "zero lines of code" software framework. His reply was "I don't believe in zero lines of code, but I believe in 'low code' frameworks". In the end he was right, because I couldn't reduce the LOC to ZERO. Regardless of how much I tried, I still ended up with ONE single line of code … 😉

Failure comes in many flavours, also sweet

Anyways, Magic is now at the point where I can literally wrap any MySQL or MS SQL Server database into a Web API backend, without having to write one single line of code. This works by having my computer read metadata from the SQL database schema, and then generate, or "scaffold", Hyperlambda code that results in HTTP REST endpoints for every single CRUD operation towards my database. So far so good, still at ZERO lines of code. Watch the video below, where my computer automatically creates 888 HTTP REST endpoints, and equally many code files, in roughly 40 seconds.

Then comes the need for authentication and authorisation, in addition to the fact that the automatically generated CRUD endpoints sometimes need to evaluate custom SQL, since you can't always rely upon the "automatically generated" SQL that only allows you to do simple CRUD – Even though it's obviously powerful, and allows you to create fairly complex stuff "for zero effort". Watch the following video for a "deep dive into how Magic actually works", which demonstrates and goes through it in more detail.

So far we've created ZERO lines of code, right – Even though we have arguably created an entire CRM system, with 888 HTTP REST endpoints, wrapping an extremely complex and rich database (Sugar CRM's database, with 222 tables). Then, unfortunately, we come to the point where I "fail" – Since I needed to "sugar the pill" with one single line of code, where I invoke an HTTP REST endpoint from Hyperlambda, using the "Evaluator" of Magic – Which becomes the one single line of code, proving that my friend was right, and that "zero code frameworks don't exist".

Sorry, I failed 😀

Psst, if you want to read my DZone article about how to invoke HTTP REST endpoints with a single line of C# code, you can find it here
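If you just want the flavour of such a one-liner, here is a hedged sketch using nothing but System.Net.Http.Json – the article itself may take a different approach, and the URL and the Customer type below are purely hypothetical placeholders.

```csharp
using System.Net.Http;
using System.Net.Http.Json; // System.Net.Http.Json package (built in from .NET 5)
using System.Threading.Tasks;

// Hypothetical DTO matching whatever JSON the endpoint returns.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class OneLiner
{
    // The "one line": invoke an HTTP REST endpoint and deserialize its JSON response.
    public static Task<Customer[]> GetCustomersAsync(HttpClient client) =>
        client.GetFromJsonAsync<Customer[]>("https://localhost:5001/api/customers");
}
```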