Invoking a PGP-encrypted lambda web service with 5 lines of code

I have just released a new version of Phosphorus Five, where one of the major new features is that it is now dead simple to create and invoke “PGP enabled lambda web services”. A “lambda web service” is basically a web service where the client supplies the code that is to be executed at the web service endpoint. Since every invocation is a cryptographically signed HTTP request, this is actually highly secure: you can whitelist only a handful of specific, pre-defined PGP keys, and decide which Active Events a caller is allowed to execute on your server, according to which PGP key was used to cryptographically sign the invocation.

If you’d like to test this, you can in fact do so immediately, by first downloading the source code for Phosphorus Five, getting it up and running, starting “Hypereval”, and invoking lambda web services on my private server. My server is at, and the first thing you’ll need to do is to import my public PGP key into your own server. This can be done by pasting the following code into “Hypereval” and clicking the “lightning” button, which will execute your code.


The above code will result in something resembling the following.

My server’s PGP key is as follows.


The above snippet will import my public PGP key into your server, and display the “fingerprint” of my server’s PGP key in a modal window. You will need this fingerprint later, when you invoke web services on my server. After you have copied the above fingerprint, simply click anywhere outside of the modal window to close it, and replace your code in Hypereval with the following code.

 * Invoking web service, passing in Hyperlambda to
 * be evaluated on my server.
     * This code will be executed on my web server!
    .foo:This code was evaluated on the web service endpoint
      =:" "
 * Displaying a modal widget with the result of
 * the web service invocation.

I want to emphasise that the above Hyperlambda was actually executed on *MY WEB SERVER*! And this was done *SECURELY*! The way I have set up my web service endpoint is to simply use the default implementation. This is a “Hypereval” page snippet, which you can visit with a normal browser by going to This will create an HTTP GET request, which loads the web service endpoint’s GUI, allowing you to play around with code and execute it on my server, using a CodeMirror code editor. Press CTRL+SPACE, or CMD+SPACE on OS X, inside the editor to trigger autocompletion.

If you create an HTTP POST request towards the same URL, this will assume you want to create a web service invocation towards my server, and act accordingly. The above [micro.web-service.invoke] Active Event is simply a convenience wrapper to create a client connection towards such a web service endpoint.


What actually occurs under the hood, is that a MIME envelope is created internally. Then the entire MIME message is cryptographically signed, using your server’s private PGP key. This signature is verified on my server when the MIME message is parsed, and only if the signature verifies is the execution of your code allowed. Before the code is executed though, the server will invoke [hypereval.lambda-ws.authorize-fingerprint], to check if the PGP key that signed the invocation has “extended rights”. This allows me to “whitelist” things that I normally don’t consider safe, but only for a handful of PGP keys which I know come from sources I trust. For instance, if a friend of mine had a Phosphorus Five server, and I trusted his server, I could allow him to create SQL select invocations, etc. – while a normal request, originating from an untrusted source, would not have these rights. I can also choose to entirely shut down access for all keys except a handful of pre-defined “whitelisted” keys if I want to.
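The authorization step can be sketched as follows. This is a minimal Python model, assuming a whitelist keyed on lowercase key fingerprints; the event names and the data layout are hypothetical illustrations, not Phosphorus Five’s actual implementation, which lives in [hypereval.lambda-ws.authorize-fingerprint].

```python
# Events any caller with a verified signature may invoke.
# These event names are made up for illustration.
DEFAULT_WHITELIST = {"p5.types.date.now", "p5.string.split"}

# Extra events granted only to specific, trusted PGP key fingerprints.
EXTENDED_RIGHTS = {
    "d9d9a341717d93ce911958aeddbb618d4f2ac9a9": {"p5.mysql.select"},
}

def allowed_events(fingerprint):
    """Return the Active Events a caller may execute, according to
    which PGP key cryptographically signed the invocation."""
    return DEFAULT_WHITELIST | EXTENDED_RIGHTS.get(fingerprint.lower(), set())
```

The point is simply that the decision of what a caller may do is keyed on the verified signing key, not on anything the caller claims about himself.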

When the web service invocation returns to the client, the client will verify that the response was cryptographically signed with the [fingerprint] I supplied during invocation, and if not, it will not accept the response, but rather throw an exception. This completely eliminates “man in the middle” attacks, assuming I have verified the PGP fingerprint of the server I am invoking my request towards.
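The client-side rule is simple to state. Below is a minimal Python model; the actual signature verification is done by MimeKit, so this only shows the fingerprint comparison step, and the function name is hypothetical:

```python
# Toy model of the client-side check: accept the response only if it
# was signed by exactly the key whose fingerprint we supplied as
# [fingerprint] during invocation; otherwise throw an exception.
def accept_response(signing_fingerprint, expected_fingerprint):
    if signing_fingerprint.lower() != expected_fingerprint.lower():
        raise ValueError("response was not signed by the expected PGP key")
    return True
```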

If I wanted to encrypt the entire invocation, I could add a simple [encrypt] node and set its value to boolean “true”, at which point both the request from my client and the response from the server will be encrypted using PGP cryptography. Below is a code snippet demonstrating this.

 * Invoking web service, passing in Hyperlambda to
 * be evaluated on my server.
     * This code will be executed on my web server!
    .foo:This code was evaluated on the web service endpoint
      =:" "
 * Displaying a modal widget with the result of
 * the web service invocation.

Notice, the only difference between my first snippet and this last one is the [encrypt] argument, having a boolean “true” value. This will encrypt the request for my server’s PGP key, and when the response is created on my server, the response will be encrypted for your PGP key. All of this happens “for free”, without you having to do anything to implement it.

I can also submit multiple [lambda] nodes, which will be executed in order of appearance, and I can supply an ID to each [lambda] node, which helps me identify the result of each snippet as it is evaluated on the server. Below is an example.

    .foo:This code was evaluated on the web service endpoint
      =:" "

The above will display as the following when being executed.

In addition, I can pass files back and forth between my web service endpoint and my client, and even reference these files from inside my [lambda] nodes – but that is the subject of a later article. However, I want to emphasise that files passed back and forth between the client and the server are serialised directly from disc to the HTTP connection, and hence never loaded into memory, allowing me to create “ridiculously large” HTTP web service requests, without exhausting either the server or the client in any way. Still, everything transmitted is easily PGP encrypted, by simply adding a simple argument to my invocation.

Internally, the request and response are MIME messages, created with MimeKit, which is by far the fastest MIME parser in existence for .Net/Mono. I have tested this myself with 5,000 small 5KB images, and it was able to encrypt and decrypt all of them in roughly 10-12 seconds. And since the OpenPGP standard dictates that PGP encrypted envelopes are compressed (think “zipped”), this has the additional bonus of significantly reducing the bandwidth consumption both ways. In general, you can expect the request and response to be some 5-10 percent of the size of the same files transmitted as “raw” (unencrypted) data.
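To see why the built-in compression helps, here is a small stdlib illustration. The repetitive payload below is deliberately chosen to compress well, so the resulting ratio illustrates the mechanism, not typical ratios for image data:

```python
import zlib

# OpenPGP (RFC 4880) allows the payload inside an envelope to be
# compressed; zlib here stands in for that compression step.
payload = b"lambda web service invocation " * 1000
compressed = zlib.compress(payload, level=9)
print(f"{len(payload)} bytes -> {len(compressed)} bytes "
      f"({len(compressed) / len(payload):.1%} of the original size)")
```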

Creating your own web service endpoint

If you’d like to create your own web service endpoint, you basically only have to open up a couple of URLs using “Peeples”. This grants the “guest” account access to the URLs necessary to evaluate a web service invocation on your server. Notice, you can still restrict who actually has access to your server, by changing the settings for your lambda web service endpoint. Below is what you’ll have to paste into the access object list of your “Peeples” module.


The first part is to enable your actual web service endpoint, and the second part is to enable clients to download your PGP key. This should look like the following in your Peeples module.

Make sure you click “Save” after having pasted the above into Peeples. To change the settings of your web service endpoint, open up Hypereval and load the “lambda-ws” page, and read the documentation for it.

Download Phosphorus Five here.


PGP based server to server communication

Using PGP when two servers communicate with each other has a lot of advantages, among other things reducing the probability of a “Man In The Middle” attack, by cryptographically signing and encrypting data sent from one server to another.

In the upcoming 8.4 release of Phosphorus Five I have made this much simpler. First of all, when you install your server, you can check off a simple checkbox, and have your server’s public PGP key transferred to your configured key server. The default key server used in Phosphorus Five is “”, but this can easily be changed in your web.config file.

Secondly, when a MIME envelope is parsed and it has been cryptographically signed, Phosphorus Five will automatically retrieve the public PGP key from your configured key server, and install it into its PGP context.

Thirdly, I have created lots of meta PGP key retrieval URLs for a default Phosphorus Five installation, allowing servers to automatically communicate and send public PGP keys back and forth. For instance, if you need to securely communicate with a server using PGP cryptography, you can simply request the server’s base URL with “/micro/pgp” appended to it, at which point the server’s public PGP key will be returned as ASCII armoured text. Notice, you’ll have to use “Peeples” to explicitly allow access to this URL if you wish for non-root accounts to be able to retrieve keys. Requesting my personal development server’s main PGP key, for instance, returns the following.

Version: BCPG C# v1.8.1.0


In addition, you can list all public PGP keys a single Phosphorus Five server has by requesting the URL “/micro/pgp/list”, which for my server yields the following (Hyperlambda).

    :Dummy Testing Key Not in Actual Use <>
    :kgkgiygiugiugiyg iugigiug <>
... etc ...

… or you can query for specific keys, using a URL such as “/micro/pgp/d9d9a341717d93ce911958aeddbb618d4f2ac9a9”, which yields the following for my server.

Version: BCPG C# v1.8.1.0


You can of course also return multiple keys at the same time, by instead passing in something that will be matched against the “identity” of the key, such as “/micro/pgp/Hansen”, which will return all keys having “Hansen” somewhere within their identity.
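The URL scheme described above can be sketched as a tiny helper. The code below is a client-side sketch only; “example.org” is a placeholder, not an actual Phosphorus Five server:

```python
# Sketch of the meta-URL scheme for PGP key retrieval described above.
def pgp_url(base, query=None):
    """Build a key-retrieval URL: no query returns the server's main
    key, "list" lists all keys, and anything else is matched as a
    fingerprint or as part of a key's identity."""
    url = base.rstrip("/") + "/micro/pgp"
    return url if query is None else url + "/" + query

print(pgp_url("https://example.org"))            # main key, ASCII armoured
print(pgp_url("https://example.org", "list"))    # all keys, as Hyperlambda
print(pgp_url("https://example.org", "Hansen"))  # identity match
```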

All in all, this creates some pretty cool opportunities for secure communication, allowing for meta retrieval, having automated processes retrieve server keys, and thus immediately establishing a secure and encrypted communication channel.

I will also implement more of these types of “convenience” methods and functionality before the upcoming 8.4 release, allowing you to do lots of other interesting things too. However, that was that for today 🙂

Implementing Blowfish password storage

When it comes to security, no additional layer of security is redundant. Even though the passwords for Phosphorus Five are already ridiculously secure, due to being PGP encrypted in a file outside of the file system accessible to the system, it doesn’t hurt to implement “slow hashing” – even though I don’t trust it alone. First of all because it’s a moving target, implying you’ll constantly have to add more workload for your own server to stay secure. Secondly because when your server and your adversary’s server are 15 orders of magnitude apart in processing power, a hashing scheme whose security is based upon “slow hashing” alone would, simply due to the nature of math, require so much time to execute locally on your server that your users would need minutes simply to be able to log in.

Regardless, I have now added Blowfish/bcrypt “slow hashing” storage of passwords in the upcoming release of Phosphorus Five. The workload factor used to hash your passwords is set to 10 by default, but can easily be modified in your web.config. In addition, each user now has his own unique salt which is used during hashing, which further reduces an adversary’s ability to crack your passwords, since even two identical passwords in your password file will still be hashed differently, due to having different salts.
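The two properties described above – a unique salt per user and a tunable amount of work – can be sketched with Python’s standard library. PBKDF2 here is only a stand-in for bcrypt/Blowfish, since bcrypt is not in the stdlib; the function name and iteration count are illustrative:

```python
import hashlib
import os

def hash_password(password, salt=None, iterations=100_000):
    """Slow-hash a password with a per-user random salt; the
    iteration count plays the role of bcrypt's work factor."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

# Two users with the same password still get different hashes,
# since each user gets his own salt.
_, hash_a = hash_password("This Is A Password With Some Random Words")
_, hash_b = hash_password("This Is A Password With Some Random Words")
print(hash_a != hash_b)  # True
```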

In addition, I have included parts of the client’s fingerprint when creating a persistent cookie on disc, if the user chooses to “Remember me” on a specific client. This further reduces the potential for credential “cookie theft”, where an adversary somehow picks up your cookie and uses it to gain access to the system. It does however invalidate your persistent cookie if you upgrade your browser, but I think this is a small price to pay for the additional security it creates. Needless to say, this also implies that the persistent credential cookie sent back to the client is not the same as the hashed password stored in your authentication file. The credential “ticket” sent to the client is created from the Blowfish hashing result, with the client’s fingerprint (UserAgent and supported languages) added before it is hashed with SHA256. SHA256 has an entropy of 2 to the power of 256, which equals roughly 1.15e+77 – within a few orders of magnitude of the number of elementary particles in the observable universe. As far as I can see, this makes “slow hashing” redundant for cookie verification, and hence only when the user explicitly logs in will there be a roughly 1-2 second delay, to perform the slow hashing.
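The ticket derivation described above can be modelled in a few lines. The exact concatenation order and separators below are assumptions for illustration, not Phosphorus Five’s actual implementation:

```python
import hashlib

# The ticket is derived from the stored slow-hash plus parts of the
# client's fingerprint, then hashed with SHA256 - so the cookie is
# never the stored hash itself, and a stolen cookie breaks on a
# client with a different fingerprint.
def cookie_ticket(stored_hash, user_agent, languages):
    fingerprint = (user_agent + "|" + languages).encode()
    return hashlib.sha256(stored_hash + fingerprint).hexdigest()

ticket = cookie_ticket(b"<bcrypt-hash>", "Mozilla/5.0", "en-US,nb-NO")
other  = cookie_ticket(b"<bcrypt-hash>", "Mozilla/6.0", "en-US,nb-NO")
print(ticket != other)  # a different browser yields a different ticket
```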

Check out the code here. Using bcrypt was surprisingly easy, I must confess – literally two lines of code after having added the bcrypt package. Really nice work by the developers of bcrypt 🙂

This means that in the upcoming release, at least when it comes to authentication, Phosphorus Five literally has the same security features that heavy duty systems such as Linux and FreeBSD have …

… in fact, arguably better. But don’t tell Linus that … 😉

Access control in Phosphorus Five

I have just significantly refactored authorisation, or access control, in Phosphorus Five, as you can see from its code. In addition, I’ve removed a couple of “anomalies”, which arguably were bugs in its code – some quite severe too, may I add. Hence, I wanted to write up how access to an object is granted or denied in Phosphorus Five, hopefully allowing you to more easily create your own access objects, granting or denying access to specific parts of your Phosphorus Five installation.

First things first: an access object in Phosphorus Five determines whether or not one or more roles have access to some part of your app. The role is the name of the access object’s root node, and whether it grants or denies access to the part in question is determined by that name, which can end with either “.allow” or “.deny”. Secondly, access objects are “cascading”. What I mean by that, is that they obey rules similar to a CSS selector. For instance, if I deny access to the path “/foo/” for some role, then unless explicitly overridden in another access object, that same role will also be denied access to “/foo/bar/”.

In addition, you can create access objects that reference all non-root roles, by creating an access object with a role name of “*”. This implies that the access object applies to all roles in the system, except the “root” role, which always has access to everything, and cannot be restricted in any way. Each access object also has a “type”. The type declaration of an access object allows me (or you) to extend the access system, by creating your own types of access objects. By default though, there are the following access object types in Phosphorus Five.

  • p5.module – Determines if access to module is given or not
  • – Determines read access to files or folders on disc
  • – Determines write access to files or folders on disc
  • p5.hyper-core – Used in Hyper Core to determine access
  • p5.system.platform.execute-file – Used to determine if user has shell execute access to a file on disc

All of the above types are expected to have either “.allow” or “.deny” appended to their names. If I wanted to grant the “foo” role access to write to the files within the folder “/foo/bar/” for instance, I could create an access object resembling the following.


The above would allow all users belonging to the “foo” role to write to all files beneath “/foo/bar/”. Though it presents us with a dilemma, which is that it also allows the user to delete or rename the folder itself. This might not necessarily be what you want, so you can further restrict this operation, by adding another (parametrised) access object to your list of access objects.


Notice the [exact] parts above. Since a “deny” object always has precedence when two access objects have the same path and role name, if the user tries to rename or delete the “/foo/bar/” folder itself, the last access object from above will have precedence, and hence prevent the user from deleting or renaming the folder. However, since the last access object has an [exact] argument, it will only match the path if it is exactly “/foo/bar/”. Hence, in our above example we first allow the user to write to everything inside of the folder “/foo/bar/”, and then deny him the right to change the “/foo/bar/” folder itself. This gives the “foo” user complete control over everything inside of the “/foo/bar/” folder, but not the folder itself. An access object can be parametrised with the following arguments.

  • [exact] – Requires an exact match
  • [file-type] – A list of pipe separated file extensions, such as e.g. “hl|md|js”
  • [folder] – Requires the path to end with a “/” for the access object to be a match

This gives you enormous flexibility, allowing you to for instance let the user write only to JavaScript and HTML files, restricting write access to all other files in the same folder – or allow the user to write to all files inside a folder, but not to create, delete or rename folders, etc. Below is an example granting the “designer” role write, create and delete access to HTML, CSS and PNG files inside your “/foo/bar/” folder.


If you in addition want to allow the designer role to create folders too, you can accomplish that with the following.


The above allows the “designer” role to create, delete, or rename JavaScript files, HTML files and PNG files inside of the “/foo/bar/” folder. It also allows him to create, delete, or rename existing folders inside of the “/foo/bar/” folder, but it prevents him from editing or deleting the actual “/foo/bar/” folder itself. By using the “*” role, you can also give all users access to write to files in some specific folder, and then afterwards restrict one or more roles. The following code allows everybody except the “guest” account to write to HTML files inside of your “/foo/bar/” folder.


The above logic works since an explicitly named access object (our “guest” object from above) will always have precedence over a “*” object. Since all IO operations check whether the user has access to the file according to the access object list, this creates a fairly secure way to grant or deny users access to parts of your Phosphorus Five installation. You can also create your own types of access objects, extending the authorisation features of Phosphorus Five with your own logic – however, that is the subject of another article later …
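As a summary, the precedence rules described in this section can be modelled in a few lines. This is a toy Python model with paraphrased field names, not Phosphorus Five’s actual implementation or syntax:

```python
# Cascading access rules: longer (more specific) paths win; at equal
# specificity a named role beats "*", and ".deny" beats ".allow";
# an [exact] object only matches the identical path.
ACCESS = [
    {"role": "*",     "allow": True,  "path": "/foo/bar/", "exact": False},
    {"role": "guest", "allow": False, "path": "/foo/bar/", "exact": False},
    {"role": "foo",   "allow": False, "path": "/foo/bar/", "exact": True},
]

def has_access(role, path):
    if role == "root":  # root can never be restricted
        return True
    verdict, best = False, (-1, -1, -1)
    for obj in ACCESS:
        if obj["role"] not in (role, "*"):
            continue
        if obj["exact"] and path != obj["path"]:
            continue
        if not obj["exact"] and not path.startswith(obj["path"]):
            continue
        # precedence: path length, then named-role-over-"*", then deny
        score = (len(obj["path"]), obj["role"] != "*", not obj["allow"])
        if score > best:
            best, verdict = score, obj["allow"]
    return verdict

print(has_access("designer", "/foo/bar/page.html"))  # True, via "*"
print(has_access("guest", "/foo/bar/page.html"))     # False, named deny wins
print(has_access("foo", "/foo/bar/"))                # False, [exact] deny
```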

Password entropy

Ask yourself the following question: which of these two passwords is more easily hacked?

  1. zXHq2$&#
  2. This Is A Password With Some Random Words

The answer might surprise you. If we assume that the user can create a password consisting of 8 characters, and each character can be one of 26 capital letters, 26 small letters, 10 different numbers, and a total of 10 different special characters, we have 26+26+10+10 ==> 72 different characters to choose from. 72 to the power of 8 ==> 7.2e+14, which is roughly 720,000,000,000,000 different combinations.

The English language contains roughly 150,000 different words. This implies that, even assuming every single word in our above phrase starts with a capital letter, a sentence with 8 words gives a total of 150,000 to the power of 8 combinations. The result of that becomes roughly 2.5e+41. So in fact, that last password from above is roughly 27 orders of magnitude more difficult to crack. This is a 1 with 27 zeros behind it!
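The arithmetic in the two paragraphs above can be checked directly:

```python
# The character-password versus word-password combinatorics above.
chars = 26 + 26 + 10 + 10            # 72 possible characters
char_password = chars ** 8           # ~7.2e14 combinations
word_password = 150_000 ** 8         # ~2.6e41 combinations

ratio = word_password / char_password
print(f"{char_password:.1e} vs {word_password:.1e}, "
      f"a factor of {ratio:.1e}")    # ~3.5e26, i.e. some 26-27 orders
```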

The last password from above hence is 1,000,000,000,000,000,000,000,000,000 times more difficult to crack!

This implies that if an adversary needs 1 year to crack your 8 character password by brute force, he’ll need 1,000,000,000,000,000,000,000,000,000 years to crack your 8 words password!

Creating short passwords resembling “rubbish”, such as our first example from above, actually provides false security – simply because to a computer, trying out every single combination of 8 random “gibberish characters” is in fact a quite simple task. In addition, to a human being, the last password example from above is probably hundreds of times easier to remember than the first one, implying the user doesn’t even need to write down his password to remember it.

Last year the developer who “invented” the above “gibberish” password regime actually publicly put forth his regrets, because 8 random characters simply don’t provide enough “entropy” for a safe and secure password regime. Entropy is what we measure password strength in. In Phosphorus Five’s upcoming release, one of the things I have changed is its default “password regime”. Instead of requiring the user to type in at least one number, one capital letter, one small letter, and one special character, with at least 8 characters in total – I have simply removed all restrictions, except requiring the password to be at least 25 characters long, allowing you to for instance use a password such as our second example from above. This allows you to use a password that is 1e+27 times more difficult to crack. In addition, it allows you to use UNICODE characters, letting you create your passwords as Chinese sentences, Russian sentences, or (in my native tongue) Norwegian sentences – making it for all practical purposes *impossible* to “crack” your password with brute force.

Now, I am a Norwegian native, extending my 150,000 word English baseline with some additional 70,000 words (the vocabulary of Norwegian). In addition I know some Spanish, and a few words of Greek, Italian, French, Arabic, Persian, etc. – extending my base vocabulary with an additional 100,000 or so per language, since an adversary, unless he knows me in person, wouldn’t know which words I know in any of these languages. This becomes a base number of 150,000 for English, plus 100,000 for French, plus 120,000 for Spanish, 70,000 for Norwegian, and 100,000 for Greek. Let’s round it off to 750,000. Then comes the fact that I can start every word with a capital letter, or not, start only the first word with a capital letter, or not, split words with “_”, “-”, or “ ”, etc. – increasing my “baseline” by at least 10x, which equals a baseline of some 7.5 million. Raised to the power of the 8 words in the passphrase, this becomes 1.0e+55, which equals the following number of combinations.


The number of elementary particles in the known observable Universe is roughly 1.0e+80. This makes the job of brute forcing a password with 8 words comparable to naming every single elementary particle in the observable universe! Still, I could easily remember my passwords, as the following illustrates.
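Under the same assumptions, the multi-language estimate can be verified:

```python
# A pooled vocabulary of roughly 750,000 words, multiplied by ~10 for
# capitalisation and separator variations, raised to the power of the
# 8 words in the passphrase.
base = 750_000 * 10
combinations = base ** 8
print(f"{combinations:.1e}")  # ~1.0e+55
```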


The above would in fact be a very simple password for my brain to remember, and it’s got **9** words, in three different languages, including one dialectic word … 😉

If you’re a system developer, and you’re about to create a password regime for your users, forcing your users to create “gibberish” passwords is actually counterproductive, and creates false security. The best security is in fact to (almost) entirely drop your password restrictions, except for length. The math above already proves this to be a fact!

The additional bonus of course is that it makes an entire subject of security obsolete: per-user server side salts! The entropy of such passwords increases to the point where a Rainbow/Dictionary attack, even with physical access to your password file, would require a computer larger than our observable universe simply to calculate all possible combinations, including a hard drive 100 trillion times larger than a galaxy!

Do you trust your RNG?

RNG stands for Random Number Generator, and is at the heart of cryptography. If an adversary can somehow predict your RNG’s output, he can effectively “guess” your encryption keys. There are valid reasons why you shouldn’t trust your RNG, depending upon your “paranoia level”. For the average user storing his TODO list encrypted on the web, this has few if any implications. However, for a highly paranoid organisation or individual, history has shown that you probably shouldn’t trust your RNG. Creating truly random numbers without some sort of organic input is, by the very definition of the task, literally impossible.

Some developers have proposed solutions to this. All of the best and most paranoid implementations add some sort of “organic input” to the mix. This can be having the user take a photo to seed the RNG implementation, listening to static noise over for instance a modem, or reading some random bytes from your hard disc. Simply put, because a computer cannot create truly random numbers without some sort of organic input.

The way I solve this in Phosphorus Five is by allowing the user to create an “organic seed” during installation. This seed is stored encrypted with a private PGP key, which is created by seeding the RNG with the salt the user provides. Below is a screenshot of how this looks in the UI.

The salt the user applies above is something he can provide for himself, and it is used to add to the existing entropy of BouncyCastle’s RNG, before the PGP key is created that is used to cryptographically secure the stored salt. This allows me to later easily create truly random numbers in the system, even if it should be proven in the future that the RNG implementation of BouncyCastle has weaknesses.

By default I use just a cryptographically secure random number, without bothering the user to ask for a manual salt, since this could arguably be considered “nuclear rocket security”, and would for the average John Doe be like hunting down a sparrow with a battleship. All in all though, a pretty rock solid security implementation I’d say, adding that tiny little difference into the mix. So no, you shouldn’t trust your RNG. History has proven that this is probably not wise – at least unless you somehow organically seed it before you start extracting random numbers from it to create cryptography keys.

Only the paranoid survive – Andy Grove, former CEO of Intel

NIST, “bcrypt”, Slow Hashing and Elliptic Curve

So, I am in this debate over at Reddit about whether I should encrypt my password file, or instead use bcrypt and “slow hashing”. I really didn’t want to go here, but since the argument has started evolving exclusively around “security best practices from NIST”, in addition to bcrypt, which is what NIST recommends developers to use to “secure their passwords” – I feel I am left with no other choice but to defend my view. Which unfortunately will look ugly for NIST.

NIST is an American institution. The acronym stands for “National Institute of Standards and Technology”. One of its purposes is to propose security standards and best practices for software developers. One of the things NIST has previously standardised is the usage of Elliptic Curve RNGs – RNG meaning “Random Number Generator”. In cryptography, having cryptographically secure random numbers is imperative, since without a truly random number, you cannot create encryption keys that are secure – implying that if an adversary can somehow “predict” the output of your RNG, he can accurately re-create your private encryption key.

When NIST standardised the usage of an Elliptic Curve RNG (the algorithm known as Dual_EC_DRBG), they said that you “should” use two specific numbers, which were really up to the developer to provide himself, but NIST gave their advice on which numbers to put in. Several years passed, and some security expert asked himself the following question about this practice: WUT …?

After some time, a lot of math, and I am assuming a couple of late nights, this expert was able to prove that whoever knew the “distance” between these two numbers would be able to predict all possible random numbers generated by the algorithm. The security expert even went as far as referring to this as a “backdoor”, and NIST had to apologise and change their standards, having literally been taken with their pants down in these matters.

Then Edward Snowden came out and literally showed proof that the NSA and the CIA had for years been trying to “infiltrate and bribe” standardisation organisations, to create backdoors into standards, which allowed them to access encrypted information. This (obviously) to a large extent explained why the Elliptic Curve standard had been tampered with, though few were willing to say it out loud.

Today NIST has another set of “best practices”. These are practices for how to securely store your passwords, and they are based upon “slow hashing”. NIST has even proposed a specific library to use for performing this task, and they’ve got hundreds of pages of documentation showing developers why they should choose this path. The problem is that their proposed solution, the way I see it, is based upon “raw computational power”. And guess what …

If it boils down to “raw computational power”, it doesn’t take a rocket scientist to understand who’ll “win” here, does it …?

Competing with “raw computational power” against an adversary such as the NSA, CIA, FSB or Chinese intelligence – or some mafia organisation, for that matter, with access to a botnet of a million computers – is the equivalent of having a midget trying to beat Mike Tyson in a boxing match.

Now, a midget can in theory beat Mike Tyson – however, not in a “fair fight”. If you gave the midget some sort of advantage that Mike Tyson did not have, then for a David to give a Goliath a whupping is actually quite easy. We could for instance arm the midget with a baseball bat, or maybe a tazer – at which point all of a sudden Mike Tyson would be the guy in trouble.

PGP cryptography is that “baseball bat”. Instead of “slow hashing” your passwords, relying upon pure muscle to protect them, you can instead simply encrypt your passwords – at which point such an adversary would be left in the dark, and need a million years to figure out your passwords, even WITH physical access to your password file.

There is a saying that goes like the following: “Fool me once, shame on you. Fool me twice, shame on me.” NIST does not have your best interests in mind when they create their “security standards” – believing they do would be silly. They’re an American government institution, and just like the CIA, NSA, FBI, “whatever”, they want you to voluntarily hand over all of your data, and your customers’ data too. If they can “trick” you into believing that you’re actually secure as you do this, then they have possibly created an excuse for you, for becoming your customers’ Judas, such that you can’t be pointed at in a court of law for espionage. However, guess what – just because somebody can’t prove you were the Judas, doesn’t mean you weren’t. Of course, not everybody knows these facts about NIST, which is why I am writing what I am writing here …

However, if you implement “bcrypt” just because NIST told you to – at least after having checked out the history of Elliptic Curve and NIST’s recommendations, and/or read what Edward Snowden has to say about these standards – you’re an idiot.

If I were to ever waste an hour reading what NIST tells me are “best practices”, it would in fact be to figure out what NOT to do. NIST is, and has been for a very long time, simply a branch of the CIA/NSA – and their recommendations are explicitly created in such a way that they shall have access to your data, and your customers’ data. And as they open up backdoors into your data for themselves, they open up backdoors to your data for Chinese intelligence, Russian intelligence, and probably also a couple of intercontinental mafia organisations in the process.

If you still believe that “bcrypt is secure” after having read this article, then I am sorry to confess that my best security recommendation to your boss, and your customers, is literally to CHASE YOU OUT OF THEIR BUILDING WITH A STICK!!

Here is my “weakly hashed” password file – Feel free to try to crack it

When it comes to security, there are a lot of dogmatic beliefs out there. For instance, some guy recommended that I hash my passwords thousands of times. The reasoning was that if my hashing algorithm took one second to execute, a Rainbow/Dictionary attack brute forcing my passwords, and then performing a lookup towards the hash values of my password file, would simply not work. This is considered “best practice” in our industry, and you can find entire sections at StackOverflow.com arguing for this approach. In fact, there are even multiple libraries written for this sole purpose.

There are two problems with that approach. Both arise from the fact that Phosphorus Five is implemented in C#. The first is that what’s a “slow hashing function” in C# can easily be “lubricated” in assembly or C to become blisteringly fast! The second is that each iteration of hashing requires some heap memory, making the garbage collector kick in every n times a user tries to log in – rendering the system, for all practical concerns, USELESS!

So instead of relying upon “best practices” in regards to this problem, I asked myself what the problem actually IS. Well, the problem is that if an adversary gains physical access to your password file for some reason, he can gain access to your passwords. The first time we “fixed” this problem, we fixed it by hashing our passwords, and never storing them in plain text. Then some jerk came around and figured out he could use a Rainbow attack to brute force your password file. This works in such a way that he generates the hash value for every possible combination of characters that could in theory be used as a password. Generating every single hash value, for every single possible combination of characters in the alphabet up to 8 characters in length, requires a surprisingly small amount of time, and can actually be done quickly, with very few resources. Then he can simply take an existing hashed password, find its instance in his “Rainbow database”, and thus find your password.
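To make the attack concrete, here is a minimal sketch in Python (not Phosphorus Five code – SHA-256 and a three-character lowercase alphabet stand in for a real precomputed table, which would cover far longer passwords):

```python
import hashlib
from itertools import product

# Build a tiny "rainbow" lookup: hash -> plaintext, for every lowercase
# password up to 3 characters. A real attacker precomputes far more.
alphabet = "abcdefghijklmnopqrstuvwxyz"
table = {}
for length in range(1, 4):
    for combo in product(alphabet, repeat=length):
        pw = "".join(combo)
        table[hashlib.sha256(pw.encode()).hexdigest()] = pw

# Cracking a leaked, *unsalted* hash is now a simple dictionary lookup.
leaked = hashlib.sha256(b"cat").hexdigest()
print(table[leaked])  # prints "cat"
```

The point is that the attacker pays the hashing cost once, up front, and every unsalted password file in the world is vulnerable to the same table.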

So we started salting our passwords with a “per user” salt, to make sure that even if an adversary manages to crack one of the passwords in your file, he still won’t be able to do a lookup for multiple occurrences of the same hash value in your password file. In addition, we started “slow hashing”. Slow hashing implies that we hash thousands of times, such that generating the hash for a single password combination takes at least one second – implying that creating this “dictionary” of pre-hashed values would require too much CPU time to be of practical use. First of all, this adds a LOT of CPU overhead to your application. Secondly, what is “slow” for your server is easily within the reach of a teenager with $10,000 to rent a server farm for a few hours, and some small amount of C/Assembly knowledge. What is slow for your server and C# is basically peanuts for a million servers running Assembly code. An organisation such as the NSA, CIA or the FSB (**PUN!**) could eat through your “slow hashing” in milliseconds, without even noticing a “blip” on their server farm’s CPU usage …!
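The per-user salt part of this is easy to illustrate. In the Python sketch below (illustrative only, not P5’s implementation), two users with the identical password end up with completely different stored values, so one precomputed table can’t cover them both:

```python
import hashlib
import os

def salted_hash(password):
    """Hash a password together with a random per-user salt."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

# Same password, two users, two unrelated hashes.
salt_a, hash_a = salted_hash("hunter2")
salt_b, hash_b = salted_hash("hunter2")
assert hash_a != hash_b  # a table built without the salt misses both
```

The salt is stored in the clear next to the hash; its job is not secrecy, but forcing the attacker to redo his entire precomputation per user.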

So you must assume that the FSB basically knows all of your passwords. Because this has been “industry best practice” for a decade or so, “all” developers have chosen this path – including yours … 😉

So I figured that the “best practices” in these regards were arguably broken, and effectively useless. So instead of doing a “slow hash”, I decided to rip the problem up by its roots, and store the password file encrypted instead. Just to prove how secure this is, I challenge my readers to figure out my password. Here is my password file …

Content-Type: multipart/encrypted; boundary="=-LLyo/DkZazvC4JmU6M3Qag==";

Content-Type: application/pgp-encrypted
Content-Disposition: attachment
Content-Transfer-Encoding: 7bit

Version: 1

Content-Type: application/octet-stream
Content-Disposition: attachment




Now try to figure out my password … 😉

Good Luck!

This of course implies that you can literally store your password file as plain text on your blog, as I have done above. Which of course makes it much easier to create backups of your password file, in addition to providing much better security than “slow hashing”.

Since “slow hashing” has been our industry’s “best practice” for a long time, an estimated guess would be that 99% of all web apps in this world have password implementations that could easily be hacked, in a couple of hours, by a teenager with $10,000 to rent a server farm and some above-average C/ASM knowledge …

If that makes you paranoid, I happen to know the solution to your problem 😀

An independent Security Expert’s Code Review of Phosphorus Five

It’s really quite fascinating what you can get people to do for you for free, if you just “adequately motivate them”, and give them access to your source code. I’ve had several security experts from Reddit over the last couple of days literally scrutinising my code with a microscope, looking for security holes. Especially one guy truly emerged as a champion in this process, Mr. Cifize. Cifize was able to find several weaknesses in Phosphorus Five, all of which are now tightened. If I were to hire people professionally to do what Cifize did for Phosphorus Five for free, it would probably have cost me somewhere between 10,000 and 20,000 dollars. I am of course very grateful to Cifize for what he has done for Phosphorus Five. Thank you Cifize 🙂


Although my existing password file was already quite well protected, Cifize pointed out that a brute force rainbow attack, done by an adversary who already had access to the file, could “reverse engineer” its passwords. Hence, my existing server-side salting and hashing logic for my users’ passwords needed some tightening up. So I followed Cifize’s advice, and significantly tightened the way I store passwords.

The way I chose to do this was to encrypt the password file with a 4096-bit RSA PGP key. This key is internally stored on the server, encrypted with AES, which makes it even tighter. The password used to release the key from the GnuPG keyring is stored in web.config, while the private PGP key used to decrypt the password file is stored in GnuPG. Since GnuPG stores its keys outside of the filesystem that Phosphorus Five has access to, this makes it almost impossible to retrieve the PGP key, even for an adversary with full “root” access (P5 “root” access) to your server.
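Phosphorus Five does this with GnuPG and a 4096-bit RSA key, as described above. As a rough illustration of the same layering idea – a key released by a passphrase kept in configuration, used to decrypt the password file only in memory – here is a sketch using the pyca/cryptography library’s Fernet. The passphrase, file contents and key derivation are all made up for the example:

```python
import base64
import hashlib
from cryptography.fernet import Fernet

# Hypothetical passphrase, as it might be read from web.config.
passphrase = "passphrase-from-web.config"

# Derive a symmetric key from the passphrase (a stand-in for GnuPG
# releasing the AES-protected private key from its keyring).
key = base64.urlsafe_b64encode(hashlib.sha256(passphrase.encode()).digest())
f = Fernet(key)

# The password file is stored encrypted at rest ...
ciphertext = f.encrypt(b"root:C90obd+yAoJ2Lgy8YiSf2VLTbI041XRaxEzNrwwej6k=")

# ... and only decrypted in memory when the server needs it.
assert f.decrypt(ciphertext).startswith(b"root:")
```

Notice that even the decrypted contents are still salted hashes, never plain-text passwords – the encryption is an extra layer, not a replacement for hashing.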

So an adversary will need literally almost complete access to your server simply to be able to decrypt your password file. And even at that point, should he somehow be able to decrypt it, the passwords are still internally stored as server-side salted, hashed values. One could argue that this is close to insanity and pure paranoia in regards to security – but when it comes to security, you should be paranoid! Better to add five or six additional layers of security than one too few … 😉

Additional security fixes

In addition, Cifize was able to find a place in the backup methods which could in theory allow an adversary to perform an SQL injection attack. Although this could only occur if an adversary somehow tricked a “root” account into importing a malicious CSV file, I still chose to fix it, to be on the safe side.

In addition, there were some minor issues with the “” script that installs Phosphorus Five on a Linux machine. To make sure your server is now installed in adequately paranoid mode, I’ve completely removed all HTTP headers that could positively identify details about the underlying technology.

Since security is a constantly ongoing process though, I would like to encourage all my readers to send me an email if you should discover a hole. In addition, I have created a “honey pot server” myself, which you are welcome to try to hack. If you want to have a go at trying to hack Phosphorus Five, you can do so here.

Yet again I would like to give my thanks to Cifize, who has proven to be an invaluable asset in this process. Thank you Cifize 🙂

You can download the latest version here

What kind of security does Phosphorus Five implement?

This seems to be a question that, for weird reasons, haunts me. When senior developers are out of arguments to bash Phosphorus Five with, they end up attacking it by claiming that it’s insecure.

First of all, the “” file, which installs Phosphorus Five on a Linux production server, will patch and update your Ubuntu Server. This makes sure that existing security holes are eliminated, and that your Linux server is up to date. Then it will install a firewall, and shut down every single network port except port 80, port 443, and port 22. This implies that the only traffic your web server will accept is HTTP, HTTPS and SSH.

When it has done this, it will automatically suggest that you install an SSL key pair on your server, and that you redirect all insecure traffic (port 80) to the encrypted channel (port 443). Assuming you take its advice, this makes it impossible for an adversary to see what type of data you are sending back and forth between your client and your web server. In fact, every single bit transferred over the wire will be encrypted if you choose this path. This also prevents a “man in the middle” from stealing your credential cookie, or performing what’s known as “session hijacking” – impersonating your user, pretending to be you, to gain access to your server’s data.
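The script’s exact configuration isn’t reproduced in this post, but a typical Apache virtual host performing this kind of port 80 to port 443 redirect looks something like the following (the ServerName is a placeholder):

```apache
<VirtualHost *:80>
    ServerName example.com
    # Send all plain-HTTP traffic to the encrypted channel on port 443.
    Redirect permanent / https://example.com/
</VirtualHost>
```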

Then it will install the very latest stable version of Mono, which is actually not that easy, since the default Ubuntu repository contains a version that is almost 5 years old. Needless to say, the number of security holes fixed in later versions can probably be counted in the hundreds, if not thousands. So this further eliminates some 100-1,000 security holes, compared to the default Ubuntu repository.

Then it stops your web server from announcing what version it is running. This is to prevent an adversary from gaining information about which software your server is running, such as the Apache version, Linux version, etc. Then it will turn OFF the ability to override security settings using .conf files in folders. This is a major security concern, since it in theory allows an adversary with write access to your Apache web server folder to override your web server settings. This is globally turned OFF, to prevent a whole range of security holes that might otherwise give an adversary control over your web server.

Then it prevents the serving of “.hl” files. This is strictly speaking not necessary, but is an additional layer of security, preventing an adversary from seeing your web server’s source code and trying to find holes in it to exploit, to gain access to your data. There is a general rule in security which is “don’t say shit”, implying that the less you communicate about your server, the less information an adversary has to start out with in order to crack into it. If an adversary doesn’t even know what system the server is running, he doesn’t even know where to start looking for ways to penetrate it.
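The exact directives the install script uses aren’t shown in this post; typical Apache equivalents of these hardening steps look something like this (the directory path is a placeholder):

```apache
# Stop announcing server version details ("don't say shit").
ServerTokens Prod
ServerSignature Off

# Disallow per-directory overriding of security settings.
<Directory /var/www/html>
    AllowOverride None
</Directory>

# Refuse to serve Hyperlambda source files.
<FilesMatch "\.hl$">
    Require all denied
</FilesMatch>
```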

As it installs your SSL key pair, it will even automatically renew your keys every 90 days. This is something which is often forgotten by a web server admin, effectively rendering the web server insecure after 90 days. The above is what the script does to your base operating system. In addition to these steps come the security implementations of Phosphorus Five itself.

The passwords for your users are stored as server-side salted, hashed values, in a protected file. Only a “root” account has read access to this file. However, even if an adversary somehow should gain access to this file, he’ll still not be able to see its passwords, because they will appear to be rubbish. This is done by first applying a “salt” to your password, then “hashing” the combined value, and only then storing this value as the “password” in the password file. Surprisingly many developers fail at this step. When a user logs into the system, the same salt and hashing function is applied, and the password stored on disc is compared to this “rubbish” value. So a password such as “Thi$I4M4P@sSwo4d” might become “C90obd+yAoJ2Lgy8YiSf2VLTbI041XRaxEzNrwwej6k=”. So even if an adversary gains access to your password file, which by itself should be impossible, he still wouldn’t be able to figure out your actual passwords. This “salt” is automatically generated, might I add.
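The salt-then-hash-then-compare flow described above can be sketched as follows (illustrative Python, not Phosphorus Five’s actual C# implementation; PBKDF2 stands in for whichever hash function P5 uses, and the constant-time comparison is simply good practice):

```python
import hashlib
import hmac
import os

def store_password(password):
    """Create what actually gets written to the password file."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(attempt, salt, stored):
    """Re-apply the same salt and hash, then compare to the stored value."""
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, stored)

salt, stored = store_password("Thi$I4M4P@sSwo4d")
assert verify_password("Thi$I4M4P@sSwo4d", salt, stored)
assert not verify_password("wrong-guess", salt, stored)
```

The plain-text password never touches the disc; only the salt and the derived “rubbish” value do.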

In addition to the above, Phosphorus Five implements “brute force password protection”. This is useful in case some adversary has a database of commonly used passwords and is performing what’s known as a “dictionary attack” – at which point he might have a function that tries to log in thousands of times every second, with different passwords, implying that at some point he’ll probably “get lucky”, successfully log into your system, and know your password. Phosphorus Five prevents this by not allowing the same username to attempt to log in more than once every 20 seconds – implying a “dictionary brute force password attack” will basically take a decade to succeed, even with fairly simple passwords. This poses a problem though: if you are currently experiencing a dictionary attack, you can’t log in from a new client. Phosphorus Five fixes this by circumventing the “brute force protection” if you have chosen “Remember me” on some specific client – effectively (almost) eliminating this problem for all practical concerns, since most users will probably use the same device(s) with Phosphorus Five, where they are encouraged to choose “Remember me” by default. Yet again, the cookie stored on disc on the client remembering the user is also hashed and salted.
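The core of this kind of per-username throttling is tiny. A minimal sketch in Python (illustrative only – the 20-second window matches the description above, everything else is made up):

```python
import time

WINDOW = 20  # seconds between allowed attempts per username
last_attempt = {}  # username -> time of most recent login attempt

def may_attempt_login(username):
    """Allow at most one login attempt per username per WINDOW seconds."""
    now = time.monotonic()
    if now - last_attempt.get(username, float("-inf")) < WINDOW:
        return False  # still inside the cool-down window
    last_attempt[username] = now
    return True

assert may_attempt_login("root")       # first attempt goes through
assert not may_attempt_login("root")   # immediate retry is rejected
```

At one attempt per 20 seconds, a dictionary of even a few million passwords takes years to exhaust, which is the whole point.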

The cryptography libraries Phosphorus Five uses are developed in Australia. This is actually very important, since Australia does not have export regulations on cryptography the way the US has. For instance, some parts of the cryptography applied in Phosphorus Five would, purely from a legal perspective in the US, fall into the same category as exporting nuclear weapons from the US. However, since I am based in Cyprus, and the guys who are building my cryptography libraries are based in Australia, I am legally allowed to export these cryptography libraries to any country not on the “terrorist list” (North Korea, Iran, etc). So a US company cannot legally even come close to the strength of the cryptography functions I happen to have in Phosphorus Five, without risking (at least in theory) the **death penalty** for being in violation of US export laws. An example of how this further strengthens the security of Phosphorus Five is how you can easily create encryption key pairs as strong as 16,000 bits – 4 times the strength that the NSA is using for their own sensitive emails and communication, might I add. If you don’t trust the default RNG seeding function, you can also provide your own manual salt, which is **added** to the existing salt, and does not “replace” it …

In addition to this, I have consciously chosen NOT to support cryptography functions I find suspicious myself. An extremely good example of this is S/MIME, which I knew for a fact, years ago, was inferior to PGP in regards to security. Of course nobody believed me when I told them, might I add. S/MIME and PGP are two overlapping standards, arguably doing the same thing – and some few weeks ago my “suspicions” were confirmed: S/MIME contains several security holes that PGP does not. This can be verified by reading up on the Efail security holes in regards to these cryptography protocols.

For symmetric encryption, I am using AES. This is the symmetric encryption algorithm preferred by the NSA, which the NSA encourages American public institutions to use for “extremely volatile information”. Phosphorus Five also supports 256-bit AES. I might add that AES is also an encryption algorithm that has been applied on several occasions by WikiLeaks, in addition to intelligence organisations such as the FSB and Mossad. So this is a well proven encryption algorithm, considered “impossible to break” by all the security-paranoid organisations on the planet today.

Then comes the problem of JavaScript. It’s easy to implement security holes in JavaScript without even knowing it – for instance by adding business logic in your JavaScript that allows an adversary to gain knowledge about your server. However, eliminating JavaScript is impossible, since that would only allow you to build websites the way they functioned in the 1990s. Like all attack surfaces, the objective is to reduce its size as much as possible – so the size of your JavaScript becomes the thing you want to control. GMail contains 1,400KB of JavaScript. Phosphorus Five contains 5.7KB of JavaScript, arguably making Phosphorus Five some 245 times “more secure” on the JavaScript parts. This is an oversimplification, I admit, but it’s still a measuring point, allowing you to quantify your application’s “attack surface”. Phosphorus Five will by default simply never use more than 5.7KB of JavaScript, unless you wrap some sort of JavaScript component.

In addition to the above points, I could probably mention security details for days without even repeating myself. And although there exist no guarantees when it comes to security, and I would of course appreciate a (*serious*) security report reporting holes in Phosphorus Five – I can confidently assure you that I doubt you have ever seen a more secure framework on this planet than Phosphorus Five. Basically …

I have no troubles what so ever suggesting Phosphorus Five to the MI6, CIA, NSA, FSB, Mossad, WikiLeaks, etc. In fact, Phosphorus Five could probably keep your Nuclear Rockets safe!

If you do have a serious concern about parts of Phosphorus Five in relationship to security, you can send me a report using the form below.