JavaScript, the Assembly Language of your Generation

Back when I was a kid, somewhere around the time when the dinosaurs roamed the Earth, most programmers would solve their real problems in Assembly. Sure, you could create some minor things in BASIC, for instance, but all “serious” problems were solved in Assembly. Today we’re in a similar shift, where extremely high-level abstractions are (apparently) not yet powerful enough to solve our “real problems”, so we’re left with coding in JavaScript. However, have no doubts: this is an anomaly.

If you attend a job interview 5 years from now, and you tell the interviewer that you’re a skilled JavaScript developer, you’ll be laughed out of the office, the same way a skilled Assembly programmer is today. The reason was summed up by Paul Graham:

A high level abstraction will always outperform a low level abstraction in productivity

The above lesson was something Graham learned in his venture Viaweb. And guess what: an employer doesn’t care if you can create a rich web app in 20 days, if somebody else can create it in 5 days, solving his needs. An employer doesn’t care if you can fluently speak Clojure, as if it were your native tongue, if some 5th-generation-system developer is able to create an Ajax web form 5 times as fast as you.

Even today an extremely skilled Assembly programmer can easily outperform a C++ developer, and implement something that’s at least 25% faster. However, those 25% are basically the difference between 0.00001 and 0.0000075 seconds. Do you really believe the end user is going to notice it …?

What the end user will notice, though, is whether you’ve had 10 people working for 10 months, or 2 people working for 2 months, to solve the problem. From the customer’s point of view, this literally translates into thousands of dollars that the software vendor needs to earn to make the project break even. In addition, your employer will definitely notice it if your competitor is able to deliver a product into production 8 months before you do.

Guess what: JavaScript is an extremely low-level abstraction, and if you can avoid it and use a higher-level abstraction instead – you SHOULD!


NIST, “bcrypt”, Slow Hashing and Elliptic Curve

So, I am in this debate over at Reddit about whether I should encrypt my password file, or instead use bcrypt and “slow hashing”. I really didn’t want to go here, but since the argument has started revolving exclusively around “security best practices from NIST”, in addition to bcrypt, which is what NIST recommends developers use to “secure their passwords” – I feel I am left with no other choice but to defend my view. Which unfortunately will look ugly for NIST.

NIST is an American institution; the acronym stands for “National Institute of Standards and Technology”. One of its purposes is to propose security standards and best practices for software developers. One of the things NIST has previously standardised is the usage of an Elliptic Curve RNG (the algorithm known as Dual_EC_DRBG). RNG stands for “Random Number Generator”. In cryptography, having cryptographically secure random numbers is imperative, since without a truly random number you cannot create encryption keys that are secure. This implies that if an adversary can somehow “predict” the output of your RNG, he can accurately re-create your private encryption key.
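To illustrate why a predictable RNG is fatal for cryptography, here is a minimal Python sketch – deliberately using the non-cryptographic `random` module, purely as an illustration – showing that an adversary who can reproduce the generator’s state reproduces your “secret” key byte for byte:

```python
import random

def keygen(seed):
    # A deterministic PRNG: anyone who knows the seed (or can predict
    # the generator's internal state) gets the exact same output bytes.
    rng = random.Random(seed)  # NOT cryptographically secure, on purpose
    return bytes(rng.randrange(256) for _ in range(16))

victim_key = keygen(1234)      # the victim derives a "secret" 128-bit key
attacker_key = keygen(1234)    # an attacker who predicted the RNG state

assert victim_key == attacker_key  # the "secret" key is fully recovered
```

This is exactly why a backdoor that lets you predict an RNG’s output is equivalent to holding everyone’s private keys.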

When NIST standardised the usage of this Elliptic Curve RNG, they said that you “should” use two specific constants, which really were up to the developer to provide himself, but NIST gave their advice on which numbers to use. Several years passed, and then a security researcher asked himself the following question about this practice: WUT …?

After some time, a lot of math, and I am assuming a couple of late nights, this researcher was able to prove that whoever knew the “distance” between these two numbers would be able to predict all possible random numbers generated by the algorithm. The researcher even went as far as referring to this as a “backdoor”, and NIST had to apologise and change their standards, realising they had literally been caught with their pants down in these matters.

Then Edward Snowden came out and literally showed proof that the NSA and the CIA had for years been trying to “infiltrate and bribe” standardisation organisations, to create backdoors into standards which allowed them to access encrypted information. This (obviously) to a large extent explained why Elliptic Curve had been tampered with, though few were willing to say it out loud.

Today NIST has another set of “best practices”. These are practices for how to securely store your passwords, and they’re based upon “slow hashing”. NIST has even proposed a specific library to use for performing this task, and they’ve got hundreds of pages of documentation to show developers why they should choose this path. The problem is that their proposed solution, the way I see it, is based upon “raw computational power”. And guess what …

If it boils down to “raw computational power”, it doesn’t take a rocket scientist to understand who’ll “win” here, does it …?

Competing with “raw computational power” against an adversary such as the NSA, CIA, FSB or Chinese intelligence – or some mafia organisation for that matter, with a botnet of a million computers in its possession – is the equivalent of having a midget trying to beat Mike Tyson in a boxing match.

Now, a midget can in theory beat Mike Tyson – just not in a “fair fight”. If you gave the midget some sort of advantage that Mike Tyson did not have, then for a David to give a Goliath a “whupping” is actually quite easy. We could for instance arm the midget with a baseball bat? Or maybe a taser? At which point, all of a sudden, Mike Tyson would be the guy in trouble.

PGP cryptography is that “baseball bat”. Instead of “slow hashing” your passwords, and relying upon pure muscle to protect them, you can simply encrypt your passwords – at which point, of course, the adversary’s data centre would be left in the dark, and need a million years to figure out your passwords, even WITH physical access to your password file.

There is a saying that goes like the following: “Fool me once, shame on you. Fool me twice, shame on me.” NIST does not have your best interests in mind when they create their “security standards”. Believing they do would be silly. They’re an American government institution, and just like the CIA, NSA, FBI, “whatever”, they want you to voluntarily hand over all of your data, and your customers’ data too. If they can “trick” you into believing that you’re actually secure as you do this, then they have created an excuse for you to become your customers’ Judas, such that you can’t be pointed at in a court of law for espionage. However, guess what: just because nobody can prove you were the Judas, doesn’t mean you weren’t. Of course, not everybody knows these facts about NIST, which is why I am writing what I am writing here …

However, if you implement “bcrypt” just because NIST told you to, you’re an idiot – at least if you have checked out the history of Elliptic Curve and NIST’s recommendations, and/or read what Edward Snowden has to say about these standards.

If I were to ever waste an hour reading what NIST told me were “best practices”, it would in fact be to figure out what NOT to do. NIST is, and has been for a very long time, simply a branch of the CIA/NSA – and their recommendations are explicitly created in such a way that those agencies shall have access to your data, and your customers’ data. And as they open up backdoors into your data for themselves, they open up backdoors to your data for Chinese intelligence, Russian intelligence, and probably also a couple of intercontinental mafia organisations in the process.

If you still believe that “bcrypt is secure” after having read this article, then I am sorry to confess that my best security recommendation to your boss and your customers is literally to CHASE YOU OUT OF THEIR BUILDING WITH A STICK!!

Here is my “weakly hashed” password file – Feel free to try to crack it

When it comes to security, there are a lot of dogmatic beliefs out there. For instance, some guy recommended that I hash my passwords thousands of times. The reasoning was that if my hashing algorithm took one second to execute, a rainbow/dictionary attack – brute forcing candidate passwords, then looking up the resulting hashes against the hash values of my password file – would simply not work. This is considered “best practice” in our industry, and you can find entire sections on StackOverflow.com arguing for this approach. In fact, there are even multiple libraries written for this sole purpose.

There are two problems with that approach. Both arise from the fact that Phosphorus Five is implemented in C#. The first is that what’s a “slow hashing function” in C# can easily be “lubricated” in Assembly or C to become blisteringly fast! The second is that each iteration of hashing requires some heap memory, making the garbage collector kick in every n-th time a user tries to log in – rendering the system, for all practical concerns, USELESS!

So instead of relying upon “best practices” in regards to this problem, I asked myself what the problem actually IS. Well, the problem is that if an adversary gains physical access to your password file for some reason, he can gain access to your passwords. The first time we “fixed” this problem, we fixed it by hashing our passwords, and never storing them in plain text. Then some jerk came around and figured he could use a rainbow attack to brute force your password file. This works in such a way that he generates the hash value for every possible combination of characters that could in theory be used as a password. Generating every single hash value for every possible combination of characters in the alphabet, up to 8 characters in length, requires a surprisingly small amount of time, and can actually be done in seconds, with very few resources. Then he can simply take an existing hashed password, find its entry in his “rainbow database”, and thus find your password.
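The lookup attack described above can be sketched in a few lines of Python – a toy dictionary attack against unsalted MD5, with a made-up word list, purely for illustration:

```python
import hashlib

# The attacker precomputes the hash of every candidate password once ...
candidates = ["123456", "password", "letmein", "hunter2"]
rainbow = {hashlib.md5(p.encode()).hexdigest(): p for p in candidates}

# ... after which cracking any stolen, unsalted hash is a single lookup.
stolen_hash = hashlib.md5(b"hunter2").hexdigest()
print(rainbow.get(stolen_hash))  # → hunter2
```

Real rainbow tables use time/memory trade-offs rather than storing every single hash, but the effect is the same: the expensive work is done once, up front, and every stolen hash afterwards is cracked with a cheap lookup.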

So we started salting our passwords with a “per user” salt, to make sure that even if an adversary manages to crack one of the passwords in your file, he still won’t be able to look up multiple occurrences of the same hash value in your password file. In addition, we started “slow hashing”. Slow hashing implies that we hash thousands of times, such that generating the hash for a single password candidate takes at least one second – implying that creating this “dictionary” of pre-hashed values would require too much CPU time to be of practical use. First of all, this adds a LOT of CPU overhead to your application. Secondly, what is “slow” for your server is easily within the reach of a teenager with $10,000 to rent a server farm for a few hours, and some small amount of C/Assembly knowledge. What is slow for your server and C# is basically peanuts for a million servers running Assembly code. An organisation such as the NSA, CIA or the FSB (**PUN!**) could eat through your “slow hashing” in milliseconds, without even noticing a “blip” on their server farm’s CPU usage …!
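For reference, here is roughly what per-user salting combined with slow hashing looks like – a minimal Python sketch using the standard library’s PBKDF2, not the actual implementation discussed in this post:

```python
import hashlib, hmac, os

def hash_password(password, salt=None, iterations=100_000):
    # A fresh random salt per user defeats precomputed rainbow tables;
    # the iteration count is what makes the hashing deliberately "slow".
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify(password, salt, expected):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
assert verify("correct horse battery staple", salt, stored)
assert not verify("wrong guess", salt, stored)
```

Notice that the iteration count is the only thing standing between the attacker and a dictionary attack – which is exactly the “raw computational power” arms race this post is arguing against.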

So you must assume that the FSB basically knows all of your passwords. Because this has been “industry best practice” for a decade or so, and hence “all” developers have chosen this path – including yours … 😉

So I figured that the “best practices” in these regards were arguably broken, and effectively useless. So instead of doing a “slow hash”, I decided to rip the problem up by its roots, and instead store the password file encrypted. Just to prove how secure this is, I challenge my readers to figure out my password. Here is my password file …

Content-Type: multipart/encrypted; boundary="=-LLyo/DkZazvC4JmU6M3Qag==";

Content-Type: application/pgp-encrypted
Content-Disposition: attachment
Content-Transfer-Encoding: 7bit

Version: 1

Content-Type: application/octet-stream
Content-Disposition: attachment




Now try to figure out my password … 😉

Good Luck!

This of course implies that you can literally store your password file, as I have done above, as plain text on your blog. Which of course makes it much easier to create backups of your password file, in addition to providing much better security than “slow hashing”.

Because “slow hashing” has been our industry’s “best practice” for a long time, an educated guess would be that 99% of all web apps in this world have password implementations that could easily be hacked, in a couple of hours, by a teenager with $10,000 to rent a server farm and some above-average C/ASM knowledge …

If that makes you paranoid, I happen to know the solution to your problem 😀

Creating Hyperlambda from C#

I’ve just been refactoring and cleaning the code where I access my “auth” file in Phosphorus Five. The “auth” file contains the usernames/passwords and authorisation objects of Phosphorus Five. Since I am invoking parametrised Active Events from within these methods, I thought I’d take some time to explain this, since it provides an excellent use case of how to “create Hyperlambda” from C#.

The first thing you must realise about Hyperlambda, is that it’s simply a node structure. In fact, when your Hyperlambda has been parsed, its result is a simple graph object, encapsulated in the Node class of Phosphorus Five. This class contains lots of helper methods, but can be reduced down to the following pseudo code.

class Node {
    string Name;
    object Value;
    list Children;
}

The Children part above is a list of Node children.
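To make this concrete, here is a minimal Python sketch of the same idea (the names are chosen for illustration; this is not Phosphorus Five’s actual Node API): a tree of name/value/children triplets, rendered in a Hyperlambda-like “name:value plus indented children” format.

```python
class Node:
    def __init__(self, name="", value=None):
        self.name, self.value, self.children = name, value, []

    def add(self, name, value=None):
        # Appends a child and returns self, allowing chained calls.
        self.children.append(Node(name, value))
        return self

    def render(self, indent=0):
        # Hyperlambda-style rendering: "name:value", children indented two spaces.
        line = " " * indent + self.name + ("" if self.value is None else f":{self.value}")
        return "\n".join([line] + [c.render(indent + 2) for c in self.children])

node = Node("create-widget")
node.add("element", "button").add("innerValue", "Click me")
print(node.render())
# create-widget
#   element:button
#   innerValue:Click me
```

Once your “code” is just such a tree, building it programmatically and evaluating it become two sides of the same coin – which is the trait the rest of this section exploits.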

This trait of Hyperlambda allows us to create Hyperlambda from C#. This might sound weird, but it actually has lots of use cases. For instance, I wanted to encrypt my “auth” file when saving it, and decrypt it when loading it. I could of course have done this manually, by referencing BouncyCastle directly. However, I already have helper Active Events that I can reuse, making my code much more condensed, and allowing me to create more “DRY” code – DRY meaning “Don’t Repeat Yourself”. So what I ended up with was the following.

/*
 * Implementation of saving the auth file.
 */
static void SaveAuthFileInternal (ApplicationContext context)
{
    // Getting path.
    string pwdFilePath = GetAuthFilePath (context);

    // Saving file, making sure we encrypt it in the process.
    using (TextWriter writer = new StreamWriter (File.Create (pwdFilePath))) {

        // Retrieving fingerprint from auth file, and removing the fingerprint node, since
        // it's not supposed to be saved inside of the encrypted MIME part of our auth file's content.
        var fingerprint = _authFileContent ["gnupg-keypair"].UnTie ().Get<string> (context);

        try {

            // Writing fingerprint of PGP key used to encrypt auth file at the top of the file.
            writer.WriteLine (string.Format ("gnupg-keypair:{0}", fingerprint));

            // Encrypting auth file's content.
            var node = new Node ();
            node.Add ("text", "plain").LastChild
                .Add ("content", Utilities.Convert<string> (context, _authFileContent.Children))
                .Add ("encrypt").LastChild
                .Add ("fingerprint", fingerprint);
            context.RaiseEvent ("p5.mime.create", node);

            // Writing encrypted content to stream.
            writer.Write (node ["result"].Get<string> (context));

        } finally {

            // Adding back fingerprint to cached auth file.
            _authFileContent.Insert (0, new Node ("gnupg-keypair", fingerprint));
        }
    }
}
The above code is all the code I need to save my “auth” file, using a PGP key stored in my GnuPG storage. Of course, if I were to use BouncyCastle directly, the above code would literally explode in size, and probably grow by at least one order of magnitude. Instead I get to reuse my existing MIME Active Events, which allow me to simply supply a PGP key fingerprint, which will look up my PGP key from my GnuPG storage, create a “text:plain” MIME envelope, encrypt that envelope, and save the encrypted envelope to disc. The thing you should pay particular notice to above is the part where I construct my “node” object. The above code’s Hyperlambda equivalent would be the following.


Of course, using a MIME envelope to serialise my auth file to disc arguably creates some overhead. However, this overhead is a small price to pay for the simplified code. In addition, it also allows me to easily extend my logic later, for instance to create multipart MIME envelopes wrapping multiple auth files, each encrypted with its own unique PGP key. It also allows me to serialise binary data into my auth file later, etc. So the added overhead of constructing a MIME envelope wrapping my auth file is really quite insignificant compared to its gains. Reading the file from disc and decrypting it is equally easy.

/*
 * Private implementation of retrieving the auth file.
 */
static Node GetAuthFileInternal (ApplicationContext context)
{
    // Checking if we can return cached version.
    if (_authFileContent != null)
        return _authFileContent;

    // Getting path.
    string pwdFilePath = GetAuthFilePath (context);

    // Checking if file exists.
    if (!File.Exists (pwdFilePath)) {
        _authFileContent = new Node ("").Add ("users"); // First time retrieval of "auth" file.
        return _authFileContent;
    }

    // Reading auth file and decrypting it.
    using (TextReader reader = new StreamReader (File.OpenRead (pwdFilePath))) {

        // Retrieving fingerprint for PGP key to use to decrypt file.
        var fingerprintLine = reader.ReadLine ();
        var fingerprint = fingerprintLine.Split (':') [1];

        // Retrieving GnuPG key's password from web.config.
        var confNode = new Node ("", "gpg-server-keypair-password");
        var gnuPgPassword = context.RaiseEvent ("p5.config.get", confNode).FirstChild.Get<string> (context);

        // Retrieving the rest of the content of the file.
        var fileContent = reader.ReadToEnd ();

        // Decrypting file's content with PGP key referenced at the top of the file.
        var node = new Node ("", fileContent);
        node.Add ("decrypt").LastChild
            .Add ("fingerprint", fingerprint).LastChild
            .Add ("password", gnuPgPassword);
        context.RaiseEvent ("p5.mime.parse", node);

        // Converting Hyperlambda content of file to a node, caching it, and returning it to caller.
        // Making sure we explicitly add the [gnupg-keypair] to the "auth" node first.
        _authFileContent = Utilities.Convert<Node> (context, node.FirstChild ["text"] ["content"].Value);
        _authFileContent.Add ("gnupg-keypair", fingerprint);
        return _authFileContent;
    }
}
The above code will, in a similar fashion to its save counterpart, decrypt my file with a handful of lines of code – probably saving me hundreds of lines of code compared to if I were to implement decryption explicitly. Its Hyperlambda counterpart would resemble the following.

 * Retrieving the password for our PGP key.

All in all, this saves me hundreds, if not thousands, of lines of code, allowing me to keep my code “DRY”, and to reuse as much of my existing implementation as possible, without creating references between my two different projects. The last point is crucial, because my auth file wrappers can be found in the project called “p5.auth”, while my MIME wrappers can be found in “p5.mime”. If you look at these projects in Visual Studio, you will notice that neither project has any reference to the other, eliminating all the problems such references might create, such as versioning problems, changing the signatures of my classes’ methods, etc. The result is a solution where I can reuse existing functionality from other projects, without any references between my projects at all!
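The decoupling trick above – invoking functionality by event name instead of by assembly reference – can be sketched in a few lines of Python (a toy event registry for illustration, not Phosphorus Five’s actual ApplicationContext):

```python
# A minimal "Active Event" registry: modules register handlers under a
# string name, and callers raise events by name, never importing each other.
handlers = {}

def register(name):
    def wrap(fn):
        handlers[name] = fn
        return fn
    return wrap

def raise_event(name, args):
    return handlers[name](args)

# A "p5.mime"-style module registers an event ...
@register("mime.create")
def mime_create(args):
    return f"encrypted({args['content']})"

# ... and a "p5.auth"-style module consumes it without any direct reference.
result = raise_event("mime.create", {"content": "secret"})
print(result)   # → encrypted(secret)
```

The caller only depends on the string `"mime.create"` and the shape of the arguments, so either side can be replaced, versioned, or recompiled independently – which is the property being described above.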

I think that’s pretty cool! What do you think …?

An independent Security Expert’s Code Review of Phosphorus Five

It’s really quite fascinating what you can get people to do for you for free, if you just “adequately motivate them”, and give them access to your source code. Over the last couple of days I’ve had several security experts from Reddit literally scrutinising my code with a microscope, looking for security holes. One guy especially truly emerged as a champion in this process: Mr. Cifize. Cifize was able to find several weaknesses in Phosphorus Five, all of which are now tightened. If I were to hire people professionally to do what Cifize did for Phosphorus Five for free, it would probably have cost me somewhere between 10,000 and 20,000 dollars. I am of course very grateful to Cifize for what he has done for Phosphorus Five. Thank you Cifize 🙂


Although my existing password file was already quite well protected, Cifize pointed out that an adversary who already had access to the file could “reverse engineer” its passwords with a brute-force rainbow attack. Hence, my existing server-side salted hashing of my users’ passwords could probably use some tightening up, so I followed Cifize’s advice, and significantly tightened the way I store passwords.

The way I chose to do this was to encrypt the password file with a 4096-bit RSA PGP key. This key is internally stored on the server encrypted with AES, which makes it even tighter. The password used to release the key from the GnuPG keyring is stored in web.config, while the private PGP key used to decrypt the password file is stored in GnuPG. Since GnuPG stores its keys outside of the filesystem that Phosphorus Five has access to, this makes it almost impossible to retrieve the PGP key, even for an adversary with full “root” access (P5 “root” access) to your server.

So an adversary will need almost complete access to your server simply to be able to decrypt your password file. And even at that point, if he should somehow be able to decrypt it, the passwords are still internally stored as server-side salted, hashed values. One could argue that this is close to insanity and pure paranoia in regards to security – but when it comes to security, you should be paranoid! Better to add some 5–6 additional layers of security than one too few … 😉

Additional security fixes

In addition, Cifize was able to find a place in the backup methods which could in theory allow an adversary to perform an SQL injection attack. Although this could only occur if an adversary somehow was able to trick a “root” account into importing a malicious CSV file, I still chose to fix it, to be on the safe side.

In addition, there were some minor issues with the “” script that installs Phosphorus Five on a Linux machine. To make sure your server is now installed in adequately paranoid mode, I’ve completely removed all HTTP headers that can positively identify details about the underlying technology.

Since security is a constantly ongoing process though, I would like to encourage all my readers to send me an email if you should discover a hole. In addition, I have created a “honey pot server”, which you are welcome to try to hack. If you want to have a go at trying to hack Phosphorus Five, you can do so here.

Yet again I would like to give my thanks to Cifize, who has proven to be an invaluable asset in this process. Thank you Cifize 🙂

You can download the latest version here

Please hack my server

Some developers are spreading vicious and incorrect rumours about Hyperlambda and Phosphorus Five on the internet. To combat these rumours, I thought I’d prove its security. The way I have chosen to do that is to create a small page allowing anonymous access to execute “eval” on my server. My intention is to prove that developers sometimes mindlessly repeat dogmatic teachings, without understanding the context where they are relevant.

For instance, in Phosphorus Five and Hyperlambda, “eval” is perfectly safe if you use it correctly. This is because Hyperlambda has an “overload” of eval which allows you to supply a list of legal Active Events, preventing the user from invoking insecure Active Events that might produce dangerous side effects. This allows you to use [eval-whitelist] for some really interesting things, such as creating “lambda web services”, where the client supplies the code to be executed, without compromising security. Of course, those spreading rumours about Phosphorus Five’s insecurity simply “avoid adding this ‘tiny little detail’” as they claim “it’s insecure, it contains ‘eval’ all over the place”. The list of things they happen to exclude while rambling on about Phosphorus Five’s “insecurity” is much longer too, but since this seems to be the “most dangerous thing about Phosphorus Five” – I thought that if they can’t hack into it even though I have given them “eval” execution permissions, most of the other things they claim are probably not true either … 😉
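The idea of a whitelisted eval can be illustrated with a toy Python interpreter (this is not Hyperlambda’s actual [eval-whitelist] implementation): the caller supplies “code” as a list of named operations, and the evaluator refuses anything not explicitly whitelisted.

```python
# Registry of all available "events"; some are dangerous by design.
events = {
    "add":         lambda args: sum(args),
    "upper":       lambda args: args[0].upper(),
    "delete-file": lambda args: None,   # dangerous: must never be exposed
}

def eval_whitelist(code, whitelist):
    # Execute each (event, args) pair, but only if the event is whitelisted.
    results = []
    for name, args in code:
        if name not in whitelist:
            raise PermissionError(f"event '{name}' is not whitelisted")
        results.append(events[name](args))
    return results

# Client-supplied code using only safe events works fine ...
print(eval_whitelist([("add", [1, 2, 3]), ("upper", ["ok"])],
                     whitelist={"add", "upper"}))   # → [6, 'OK']

# ... while anything outside the whitelist is rejected outright.
try:
    eval_whitelist([("delete-file", ["/etc/passwd"])], whitelist={"add"})
except PermissionError as err:
    print(err)
```

The security of such a scheme rests entirely on the whitelist being a closed set of side-effect-free operations – exactly the point being made above about context mattering more than the word “eval”.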

You can find the app here. And you can also create HTTP POST requests towards the same URL, and provide your Hyperlambda as the body of your request, at which point the result of your execution will be returned back to your client. Below is the entire code that I used to create this page.

 * Creates a Hyperlambda "eval" page.

   * Web Service invocation.
   * Retrieving body of request, and executing it using [eval-whitelist].

 * Creates a default page, with a header and a paragraph.

     * Including Micro CSS file, serious skin, and fonts.

              innerValue:Hack my server challenge

             * CodeMirror instance.

                     * Retrieves code, executes it, and creates a modal window with
                     * the results of the execution.

Notice, you also indirectly have access to read from my MySQL database, since I have whitelisted a couple of the Hypereval “snippets” Active Events. If you can break into my server using a security flaw in Phosphorus Five, I will publicly admit that Phosphorus Five is insecure, and allow you to fill an article at my blog with whatever content you want to fill it with. I will basically allow you to write a blog post at my website, spreading anything you want to inform my users about, related to Phosphorus Five, me, and my person. And I will link to that article with a bold “warning” from the project’s GitHub page, at the top of its file.

I have only one criterion. Obviously I cannot guarantee that Linux, Ubuntu, MySQL, Apache, or any of the other software pieces my box is using are safe – even though I am pretty confident that these projects too are quite safe, considering the amount of security hardening Phosphorus Five applies to your Linux box as it is being installed. However, I will ask of you that you use a security hole in Phosphorus Five, and not a hole in any of its supporting software, and that you prove you did, by handing me a reproducible which I can use to verify that you used a security hole in Phosphorus Five, and not one in Linux or Apache etc …

Good luck! 😉

Epilogue: the next time you hear a mindless dogmatic belief, try to ask yourself two questions.

  1. What’s the motive of those putting forth the claims?
  2. What is the context of what they are saying?

For instance, lots of users will attack Phosphorus Five for using a server-side salt when hashing its passwords. This is completely irrelevant for a system such as Phosphorus Five, since its intention is to be an enterprise web application framework, with probably never more than a maximum of 1,000 registered users. This means that the statistical probability of two passwords colliding is so small that creating a “per user” salt only adds complexity, arguably reducing security.

Security is always about “beating the odds”. And to be able to apply security adequately, you need to understand the context and the reasons why we do things the way we do. Simply “following best practices”, without understanding the reasons why they were created in the first place, actually reduces security – instead of improving it …

I also want to emphasise that my Linux box has not had a single additional security measure applied, beyond what the default installation script of Phosphorus Five provides. Still, I am willing to bet my honour that you won’t be able to penetrate it – at least not due to a security hole in Phosphorus Five!

If you manage to somehow hack it though, you can send your reproducible to, together with whatever text you want to provide to my users, warning them against using Phosphorus Five.

The future is obsolete

For a couple of hundred years, we have lived in an economic society where our labour possessed value for others. And as new paradigms came and went – the agricultural society, the industrial society, the service-based society, the knowledge-based society – we always seemed to be able to adapt. That stops in the very near future. Less than a decade from now, there is literally no job whatsoever that a $1,000 computer or robot cannot do better, more accurately, and less expensively.

When Deep Blue humiliated Garry Kasparov in 1996/97, it took only 15 years before the average pocket calculator could beat the world chess champion, without even making an effort. Today the same research team that created Deep Blue is creating doctors and physicians. This implies that 15 years from now, your physician and your surgeon will be a pocket calculator. We’re already seeing signs of this, such as self-driving cars, automated supermarkets, the delivery drones Amazon is using, etc. Slowly but steadily, computers and robots will entirely replace every single task human beings are able to perform today. And I can’t wait!

Hallelujah, humans are obsolete soon!! 😀

If work was so valuable, the rich would keep all the jobs for themselves

Contrary to others, I literally can’t wait. I’m looking forward to this future, rejoicing, realising I can spend more time exercising my hobbies – Which paradoxically includes playing chess. The future is in fact so bright, I’ve gotta wear shades!

If you’re a youngster today, and you’re wondering about what education to pursue, let me give you my advice. First of all, when it comes to knowledge, skills, and “thinking”, there is not a single place in this world where you cannot be outperformed by the average pocket calculator 15 years from now. So spending your time at universities, reciting endless (and useless) information-based facts, or practising the very art of logic itself (“logic” derives from the Greek logos, “reason”), is completely useless. Hence your PhD 15 years from now is about as useless as the ability to drive a horse and carriage is for the average employer today. However, there are places where you can outperform computers, at least for the foreseeable future. This is where emotions are important, and the fact that you’re actually a human being.

For instance, regardless of how intelligent our computers and robots become, and regardless of how perfectly they can play the saxophone, many people will still enjoy seeing an actual person playing the saxophone when visiting their local pub – simply because we enjoy surrounding ourselves with talented human beings. The same way most people enjoy talking to an actual waitress as they’re ordering their coffee – if not for any other reason than making a flirty remark as she takes the order.

People still enjoy watching Magnus Carlsen play chess, because of his personality, his body language, and the way he conducts himself in public. In fact, the (actual) world chess championship has at most a handful of spectators, since these competitions are performed between supercomputers which are a thousand times better than Magnus Carlsen – competing in dark rooms, deep inside mountain halls, with no physical chess board and no lights, in temperatures below freezing, to make sure their “brains don’t boil over”. This is the *actual* world chess championship. But nobody cares, because they find Magnus Carlsen, although significantly inferior to these machines, to be a nice human being, and they enjoy watching him play chess. We even refer to Magnus as the “world champion”, even though we all know for a fact that the last human world chess champion was Garry Kasparov in 1996.

If you’re basing your future on your cognitive superiority, your logic, and your ability to “think”, you’re in trouble. A piece of advice: relax, be more human, be nicer, and calm down. Otherwise Deep Blue is going to kick your ass out of your office, while your boss laughs, thinking “finally I got my revenge over that fucking arrogant asshole”.

Peace out,

“Just another Saxophone player”

And do you want to know a secret? You – yes you, as a system developer – are making everybody else in this world obsolete. There seems to be some kind of justice in the fact that somebody is now making YOU obsolete …

… and most people will probably laugh, and cheer me on, as I do it!

Create web apps without JavaScript, CSS, HTML and C#

One of the things about Phosphorus Five is that it allows you to create fairly rich web apps without knowing any JavaScript, HTML, CSS, or C# – Or any other server-side backend language for that matter. You can get away with this because most problems you’ll encounter in your web apps are already solved in the framework. For instance, showing a login dialogue is one line of Hyperlambda. Creating a datagrid wrapping your MySQL database is 7 lines of declarative code. An Ajax tree view that shows the folders on your (server) disc is another 27 lines of code. A modal window, another 5 lines of code. Asking the user to confirm some action, 3 lines of Hyperlambda. Since Hyperlambda “feels” more like YAML than C# or JavaScript, it doesn’t feel like you’re actually coding.

Ask yourself how many of your problems are unique to your app. Chances are most of the things you need to do have been done by thousands of other developers previously, in thousands of apps. If this wasn’t true, why do we find so many answers to our questions when we’re stuck and we Google our problems? Phosphorus Five takes advantage of that fact, and makes it extremely easy to create reusable code and components. In fact, when you create your web apps in Phosphorus Five, you don’t create a monolithic app – It’s simply impossible. What you end up creating is a whole range of reusable components, which you can put into your current app, and into your next one. Since I have already created a whole range of different apps with Phosphorus Five and Hyperlambda, ranging from webmail clients to IDEs, there’s a component in Phosphorus Five for most of your needs. This literally allows you to create your app almost exclusively from pre-built components, with a close to zero code base.

According to rumours I once heard, Visual Studio contains roughly 1.5 million lines of code. Hyper IDE contains 2,429 lines of code, not counting comments and whitespace – making Visual Studio roughly 617 times larger than Hyper IDE. Obviously Visual Studio contains tons of features Hyper IDE does not. However, most of those features are things you’ll never miss, and things you didn’t even realise were there. In fact, I doubt there’s a single person in this world, including the project manager of the Visual Studio team, who can even list every single feature in VS. Secondly, Hyper IDE has unique traits of its own, which Visual Studio does not. For instance, the ability to use it from your phone, access it from any terminal, grant users write access to specific files only, etc, etc, etc. Being roughly 10x faster and more responsive for most tasks obviously helps too. Being a gazillion times easier to extend is also an extreme advantage. In fact, there’s a name for apps such as Visual Studio, and it’s not a very flattering name either. The name we use for such apps is “bloatware”.

When I built Hyper IDE I already had all the main components I needed. This was the reason why I didn’t need more than 2429 lines of code. I had a CodeMirror editor widget, since I had already created one for Hypereval. I had a tree view Ajax widget, to show the files on disc. I had a toolbar widget, which I had already used in many other apps. I had modal windows from before. Etc, etc, etc.

When you create an app in Phosphorus Five, you don’t need to write as much code as you would with “whatever else”. First of all, because most of your problems are problems I have already solved. Secondly, the problems you actually do need to solve can, by default, be solved in such a way that once they’re solved, you can reuse the solution in your next project – Without creating many dependencies between your apps, might I add. This allows you to solve at least 80% of your problem, sometimes 100%, without resorting to writing JavaScript, HTML, or CSS. It also allows you to create rich and highly advanced server-side functionality without writing code in PHP, C#, or any other server-side programming language for that matter.

Every time you reinvent the wheel, you are stealing from your employer

The above might sound drastic, but you as a developer have a cost. This is often an hourly cost. Implying that if you do things you don’t have to do, you are indirectly stealing from your employer. You probably don’t intend to steal – However, your vanity and your “not invented here” syndrome prevent you from seeing the truth. So you end up spending your employer’s money to learn something that is completely useless – Because you want to become the “best developer in the world”. This is literally stealing!

If you’re an average full-stack software developer, ask yourself the following questions: How much CISC x86 assembly code do you know? How does the pipeline in the latest Intel CPU work? What is the L1 cache size of your web server? If you’re an average software developer, you’ll have no idea how to answer those questions. Sure, you can probably find the answers easily using Google, if you ever need them. But the fact that you are now probably frantically Googling for the answers illustrates my point. Below is a piece of code. Not counting its comments and whitespace, there are 27 lines of code in it.

    /*
     * Creates a page with a tree widget in it, which displays
     * all the folders on disc.
     */

    /*
     * Including Micro CSS file.
     */
    micro.css.include

    /*
     * Creating our page.
     */
    create-widget
      class:container
      widgets
        h3
          innerValue:Tree view widget

        /*
         * This is our treeview widget.
         */
        micro.widgets.tree
          items
            root:/

          /*
           * This one will be invoked when the tree needs items.
           * It will be given an [_item-id] argument.
           * We simply list the folders of the item the tree needs
           * children for here.
           */
          .onexpand
            list-folders:x:/../*/_item-id?value
            for-each:x:/-/*?name
              list-folders:x:/@_dp?value
              split:x:/@_dp?value
                =:/
              add:x:/../*/return/*
                src:@"item:{0}"
                  :x:/@split/0/-?name
              add:x:/../*/return/*/items/0/-
                src:x:/@_dp?value

              /*
               * Making sure "leaf" folders don't render an expand icon.
               */
              if:x:/@list-folders/*
                not
                add:x:/../*/return/*/items/0/-
                  src
                    class:tree-leaf
            return
              items
Here is what it results in …

The only thing you need to know about the above code is what it does, when to use it, and that it works! It’s secure, it’s extremely efficient on bandwidth, and it shows the folder structure on your disc. The same is true for your CPU. You don’t need to worry about how large its L1 cache is. You only need to know that it works. And if you can understand how the above [.onexpand] Hyperlambda works, you can easily change it to show other things besides your folders.

In fact, if the above wasn’t true, the very idea of “encapsulation” would be meaningless. You already use thousands of things you have no idea how work. For instance, what is the implementation of System.String’s Clone method? I’ll give you a hint: “return this;”. Now explain to me why that works, and doesn’t create problems when the same string is shared among multiple threads … 😉

99.9% of the world’s developers cannot answer the above question, without resorting to Google …
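For what it’s worth, the answer is immutability: a string can never change after it’s created, so “cloning” it by handing out the same reference is perfectly thread-safe. Python’s standard library pulls exactly the same trick – a small illustration, using Python rather than C#:

```python
import copy

# Strings are immutable in Python, just as in .NET. "Copying" an
# immutable value can therefore safely hand back the very same object -
# the moral equivalent of System.String.Clone's "return this;".
s = "hello world"
clone = copy.copy(s)

# Both names refer to the exact same object in memory ...
print(clone is s)  # True

# ... and sharing it between threads is safe, since nobody can mutate it.
```

No mutation means no race conditions, which is why the “copy” can be a no-op.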

Sure, you could probably create your own custom tree view using jQuery, PHP, CSS, JavaScript, C#, etc – However, that would take you (at least) a month, and it would contain thousands of lines of JavaScript, CSS, and C#/PHP. That month is a lot of money for your employer. Why would you do that, when all you need to do is spend 5 minutes creating 27 lines of code? Why would you spend your employer’s money on basically nothing, possibly driving your employer towards bankruptcy by throwing money literally out of the window – Eventually leaving yourself without a job? Can you answer that question? In fact, let me show you how your resume will look to a potential employer 10 years from now. I should know, because I used to be the Windows API expert on the planet … 😉

  1. PhD in useless information a
  2. Masters Degree in useless information b
  3. 10 years of experience in useless information c
  4. Etc …

Do you want to know a secret? There are few differences between the art of flipping a burger and the art of creating software. Both can be optimised to the point where McDonald’s can serve a billion burgers every day, with each employee producing an average of 100 burgers a day. And if you choose not to use the better approach to create your software, because of vanity – Not only are you damaging your employer, but you’re also spending intellectual energy teaching yourself something that will be completely useless 10 years from now. Don’t believe me? Ask some WinForms developer how useful his knowledge is today …

… or a FoxPro developer for that matter … 😉

May I suggest you instead spend some time learning how to play chess, or maybe the Saxophone? At least that way you’ll end up with a tangible value, one that has a measurable, positive impact on your life. Maybe start swimming or exercising? If you’re the average software developer, God knows you need it!

For God’s sake punk software developer, get a life!

Don’t waste your life learning something that nobody cares about, and that will never benefit you or anybody else in the long run – At least not unless you do it in your own time, and you find it interesting and challenging …

For the record, this article, and these ideas, are probably the reason for this.

The more you fight me, and my ideas, the more you prove my point!

Epilogue – 13 minutes after I submitted a link to this article to /r/programming on Reddit, it had 75% downvotes. I suspect I was the only one who upvoted it, implying 100% of the others voted it down. A qualified guess is that I will soon be banned from the group. However, I’ve been wrong before. If I get banned, I will add their reasons for banning me here – Which I suspect will be “obsessive self-promotion”, although you can only submit links there, and the quality of this article is obviously quite high. A piece of advice when it comes to my ideas: make up your own mind. Simply because the more “senior” your developers are, the more resistance they tend to show towards my ideas …

Software Architecture lessons from Norwegian Soccer History

Everybody who knows anything about Norwegian soccer history knows we’re just about the worst soccer players on the planet individually – maybe the only ones worse are the Swedes and the Danes … 😉

Individually, it’s as if we are physiologically incapable of playing the game. Compare the average Norwegian soccer player to a Brazilian one, and it feels like you’re watching a giraffe play tennis as the Norwegian guy tries to “samba” his way down the field …

However, 25 years ago there was this magical soccer trainer. His name was Egil “Drillo” Olsen (Google him). He broke all known soccer theory of the time – for instance, instead of having his players practise the things they were bad at, he had them practise what they were already best at. There was this one guy whose sole purpose was to shoot the ball from one spot on the field to another spot on the field. He was very good at shooting accurately and hard, but had no other real skills. Another guy’s job was simply to stand at that spot, bring the ball down from the air (his specialty), and score a goal. All the other players were chosen by similar criteria, and none of them were “the best” at soccer in general. Individually, the team was arguably composed of a bunch of “soccer retards”. However, this team humiliated many famous soccer teams: Brazil, Germany, England, “you name it”.

So let’s move this theory into software teams, and see if we can learn something. Let’s imagine we’re going to create a web application. OK, we know we’ll need JavaScript knowledge, so we find the best JavaScript developer in the world. We know we’ll need HTML and CSS, so we find the best CSS guy in the world. We know we’ll need C#, so we find the best C# developer in the world. However, these guys’ individual skills become a liability for creating our application. Simply because our app as a whole doesn’t care about the brilliance of its C#, JavaScript, or CSS code. And since all of these guys are so darn special, the most skilled in their areas of expertise, they’re often also filled with vanity, and an incapacity to cut corners in order to make their results *integrate* with the results of the guy sitting next to them. They’ll often hold hour-long speeches about why they can’t do what they’re told, because of a, b, or c. In fact, this problem is so common it has a name: “The Mythical Man-Month”. If you haven’t heard of The Mythical Man-Month, I’ve included it below. Realise that this “theory” of software development has arguably been demonstrated, several times over. It goes like this …

One man can create in one month, what two men can create in two months
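The arithmetic usually offered for this effect is communication overhead: a team of n people has n·(n−1)/2 pairwise communication paths, so coordination cost grows quadratically while the number of hands only grows linearly. A quick sketch, purely illustrative:

```python
# Number of pairwise communication paths in a team of n people:
# every person must potentially coordinate with every other person.
def communication_paths(n: int) -> int:
    return n * (n - 1) // 2

print(communication_paths(2))   # 1  - one conversation to keep in sync
print(communication_paths(10))  # 45 - 5x the people, 45x the paths
```

Which is exactly why adding people to a late project famously makes it later.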

Although we have known about The Mythical Man-Month for almost half a century, I suspect we don’t really understand the reasons for it. At the very least, we do NOT know how to fix it. However, I suspect Egil “Drillo” Olsen might actually have the solution for us.

The quality of your individual employee’s work is actually completely irrelevant. The only question you should be asking yourself is: “How can I integrate his results with the results of the guy sitting next to him?” This is such a universal truth, I suspect you could literally handpick a bunch of medium-skilled software developers, with the sole aim of integrating their combined efforts, and these “mediocre” developers would far outperform a team of the same size composed of the “best guys on the planet”. Kind of like Norway humiliated Brazil in soccer some 20 years ago …

So let’s move this on to *framework design*, which my previous article was all about. Well, the same laws apply. If you want to create a web app, you’ll need to pick a JavaScript library, so you pick the best. You’ll need to pick a CSS framework, so you pick the best. You’ll need to pick a database, so you pick the best. You’ll need to pick a server-side programming language, so you pick the best. Etc, etc, etc …


And yet the end result suffers. Because the quality of your individual parts is completely irrelevant when you can’t integrate them into a “whole” – because they’re built with different philosophies, architectural design patterns, goals, etc, etc, etc. And the vanity, and the belief that every single library is created “perfect”, makes its maintainers incapable of perceiving anything wrong with their tools. From their point of view, it’s already *perfect*!

I suspect this is in fact the root of all failed software projects. At least intuitively it feels like a universal truth. If you instead focus on integration from DAY 1, and cut a couple of corners on the quality and performance of your individual components if necessary, the end result becomes far superior, and much more easily maintained. Simply because your individual parts don’t scatter “all over the place”, each focusing on its own success – they focus on the success of the group as a whole. Hence, your framework realises it does not exist as a bunch of single entities, but as a collection of entities, such that group dynamics kick in and decide the fate of the whole – and hence your ability to deliver good quality end results to your clients at the deadline. So a software development framework that is inferior measured on its individual parts’ performance will still run circles around the combined efforts of the “best individual parts in the world”.

In fact, this consistently improves security too, even when you have to cut corners that arguably reduce security from an individual component’s point of view. The recent Efail security flaw in encrypted email, for instance, was not a problem with the encryption standards, and not a problem with the email clients. It was a problem you could not blame on any single component in the system – yet the security flaw was as real as daylight. I happen to have this knowledge from “the best MIME library creation expert on the planet”: it was a bug in how the encryption libraries were *integrated* into the email clients! So even when you have to “cut corners in regards to security”, security as a whole still improves, simply by realising it’s not about the individual component’s performance. It’s about “the group” as a whole (the framework), and its ability to perform its task as a whole!

Mythical Man Month … What month did you say …?

Game Theory and Framework Architecture

Nash showed more than 50 years ago that if everybody competes for the same prize, we inflict harm not only on ourselves, but also on the group as a whole. Or at least, this is true in a “zero-sum game”, meaning a game where there can only be one winner. According to legend and Hollywood, he conceptualised these ideas as he and three of his friends were trying to pick up a beautiful blonde girl at a bar. You can see this depiction below. His theories were later referred to as “Game Theory”, and they tell us something about how we can succeed both as individuals and as a group. The paradox is that a web application framework is actually a “group”. It is not a single entity, but a collection of tools that work together the same way a “group” works together. For instance, there’s Ajax, database layers, markup generation, etc, etc, etc. All in all, this means the laws of “group dynamics” are key to understanding how to best implement and create a web application framework.
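To make the idea concrete, here is a toy model – my own illustration, not Nash’s actual mathematics. Four players each pick a target; the top prize is lost entirely when several players fight over it, while the lesser prizes are uncontested:

```python
# Toy model: the "top" prize (worth 10) is only won when exactly one
# player goes for it; players fighting over it block each other and all
# walk away with nothing. "Lesser" prizes (worth 4 each) are never
# contested, so everybody who picks one gets one.
def total_payoff(choices):
    payoff = 0
    if choices.count("top") == 1:
        payoff += 10
    payoff += choices.count("lesser") * 4
    return payoff

print(total_payoff(["top"] * 4))                            # 0  - everybody loses
print(total_payoff(["top", "lesser", "lesser", "lesser"]))  # 22 - the group wins
```

When everybody chases the same first prize, the group’s total payoff collapses; when each member settles for a different target, the group as a whole comes out ahead.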

Phosphorus Five is a “second best web application framework”. For instance, it’s very difficult, if not flat-out impossible, to re-create a social media website such as Facebook or Twitter with it. It also does not include a MongoDB or Couchbase database adapter, even though these are, according to dogma, among the most scalable databases in the world today. Instead it contains only a MySQL database adapter, leaving you to create your own MongoDB/Couchbase wrapper in C# if you wish. It does not use the latest and hottest C# features, which has the additional advantage that it’s more easily ported to Mono, and doesn’t require you to understand all the latest features from Redmond to read its code. Hyperlambda is slower than C#, and has less syntax than both LISP and YAML. It does not contain a rich JavaScript API. In fact, it barely contains any JavaScript at ALL! I could go through every single feature in Phosphorus Five, and illustrate how it consistently chooses the “second best” alternative where it can. However, the end result of its combined effort becomes surprisingly powerful, precisely because of this decision – because of “Game Theory” and “Group Dynamics”.

For instance, with Phosphorus Five you can create web applications without knowing any C#, JavaScript, CSS, or HTML. In fact, the only syntax you’ll need to learn in order to create highly rich web apps is the difference between a colon (:) and double spaces (”  “). These are the only two syntactic tokens in Hyperlambda. With Phosphorus Five you can create a modal window with 5 lines of code. You can traverse your folder structure on disc in a tree view with 25 lines of code. You can create an Ajax MySQL datagrid with 7 lines of code. Etc, etc, etc. The reason is that none of its parts “block” any other parts. The “individuals in the group” are 100% perfectly untangled.

If I were to “micro-optimise” Phosphorus Five the way lots of other frameworks have been micro-optimised, the end result of trying to become the winner on all parameters would be more syntax, more complexity, more entanglement, more cognitive noise as you create your code, more overhead, etc, etc, etc. The individual parts of Phosphorus Five would basically “block” each other. Phosphorus Five incorporates the ideas coined by Nash more than 50 years ago at its core! And the end result of consistently choosing to be number 2 instead of number 1, is that as a *WHOLE* it wins. Don’t believe me? Watch what a Nobel laureate has to say about this …

Whenever “group dynamics” applies to your problems, you should never strive to win first place in any individual part of your system. Instead, you should try to become second best on all parameters. The reason is that as you do, the sum of your work – your collective effort – inevitably becomes the winner. Nash showed this, and received the Nobel Memorial Prize in Economics for it. And I believed him, and implemented Phosphorus Five in accordance with his theories. Resulting in … 😉