article archive

Matrix on Mobile with OmniLedger login

For some time it was impossible to use the C4DT Matrix chat on mobile phones, as RiotX didn’t work correctly with the OmniLedger login. Now you can use the Element app together with our login to get Matrix chat on your phone.

TL;DR

Of course you just want to know how it works:

  • Add your phone as a device to your OmniLedger account
    • On your desktop, go to OmniLedger devices
    • Click on Add Device and choose a name
    • Scan the QR code with your phone and visit the page
  • Install Element (Android / iOS) and start it
    • On the page Select a Server, choose Other and enter matrix.c4dt.org as the address
    • Choose Continue with SSO and, in the web app, confirm with Login and I trust this address

That’s it, you now have Matrix chat on your mobile phone!

Why it didn’t work before

The problem was that we configured Matrix to use our single sign-on (SSO) login, which does not rely on passwords, but instead authenticates with a signature from a private key. This private key is stored in the mobile phone’s browser. However, the previous Matrix app, RiotX, used an internal browser, which did not have access to this private key. So the SSO flow failed and you couldn’t log in.

The new Element app correctly opens the mobile phone’s browser, and can thus use the private key to generate a transaction on the blockchain to prove that you have access.
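
To illustrate this kind of password-less login, here is a minimal sketch of challenge-response authentication with a key pair, using Go’s standard ed25519 package. This is not OmniLedger’s actual code: the real flow signs a blockchain transaction rather than a bare challenge, and the key lives in the browser’s storage.

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"
    )

    func main() {
        // The device holds a key pair; only the public key is known to the service.
        pub, priv, err := ed25519.GenerateKey(rand.Reader)
        if err != nil {
            panic(err)
        }

        // Instead of asking for a password, the server sends a random challenge.
        challenge := make([]byte, 32)
        if _, err := rand.Read(challenge); err != nil {
            panic(err)
        }

        // The device proves possession of the private key by signing the challenge.
        signature := ed25519.Sign(priv, challenge)

        // The server verifies the signature with only the public key.
        fmt.Println("login accepted:", ed25519.Verify(pub, challenge, signature))
    }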

Don’t have access?

If you’re interested in testing it, you can contact Linus and ask for login credentials.

Exploring golang with a REPL

It is customary for users familiar with a command-line shell or dynamic languages such as Python to work with a REPL, or Read-Eval-Print-Loop. This kind of interface is very powerful for common exploratory tasks: quickly interact with some data or object, prototype an idea, or learn a particular functionality or library.

Unfortunately, static languages such as Go do not provide such an interface, and the common way to perform those tasks is to fire up an editor and write a small program. Granted, the fast edit-compile-run cycle of Go makes it very easy, but it is still not ideal. Another common Go solution is to use the Go Playground, which still requires writing a full program before executing it.

Fortunately, some projects are trying to close this gap, and gore is one of them. Once installed, just type gore and you are brought into an interactive shell waiting for your input. You can then enter Go statements, import packages, view documentation, examine variables, and receive immediate feedback, all the while benefiting from line editing and code completion. Furthermore, since behind the scenes gore works by generating a program from your input, compiling it, running it and presenting the result, you can at any time view and save the generated program, ready to be used as a basis for your project.
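
To give an idea of the workflow, here is roughly what a short gore session looks like; the : commands are gore’s built-ins (see :help for the full list), with :print showing the generated program and :write saving it to a file:

    $ gore
    gore> :import strings
    gore> s := strings.Repeat("go", 3)
    "gogogo"
    gore> strings.ToUpper(s)
    "GOGOGO"
    gore> :print
    gore> :write repl.go
    gore> :quit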

A similar idea but with a different approach is yaegi. This project is more of a Go interpreter, which can be used as an interactive shell, but also as an embedded interpreter in regular Go programs (think eval()-ing Go code at runtime), or as a script interpreter (to be used in the shebang line).
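
As a small taste of the embedded use, here is a minimal sketch of eval()-ing Go code at runtime with yaegi; the import path is that of the upstream repository, so adjust it to the version you actually install:

    package main

    import (
        "fmt"

        "github.com/traefik/yaegi/interp"
        "github.com/traefik/yaegi/stdlib"
    )

    func main() {
        // Create an interpreter and expose the standard library to it.
        i := interp.New(interp.Options{})
        if err := i.Use(stdlib.Symbols); err != nil {
            panic(err)
        }

        // Evaluate Go source code at runtime; results come back as reflect.Values.
        if _, err := i.Eval(`import "strings"`); err != nil {
            panic(err)
        }
        v, err := i.Eval(`strings.Repeat("ya", 2) + "egi"`)
        if err != nil {
            panic(err)
        }
        fmt.Println(v) // yayaegi
    }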

These tools definitely blur the line between static and dynamic languages, and can put a few significant new tricks up your sleeve.

golang code analysis

To ensure good code quality, this week we are looking at some golang tools (golang being one of the languages used for many of our projects at C4DT) that help us do so. Some ideas were taken from the Awesome Go List, a good reference for everything related to golang.

First, we show some “classic” golang tools we use (illustrated in the snippet after this list):

  • gofmt: checks that the formatting follows the golang standard
    • the -s flag is also quite nice, as it simplifies code, thus helping to write concise code
  • go vet: static code analyzer; checks for useless assignments, unreachable code and some other standard mistakes
  • golint: like gofmt but different: while gofmt can rewrite the files as needed, the changes suggested by golint usually can’t be applied automatically
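
As a hypothetical example, the following code compiles fine but trips all three tools; the comments indicate which tool reports what:

    package lintdemo

    import "fmt"

    // Greet prints a greeting for each name. Without this comment, golint
    // would report “exported function Greet should have comment or be
    // unexported”.
    func Greet(names []string) {
        for _ = range names { // gofmt -s simplifies this to “for range names”
            fmt.Printf("%d\n", "hello") // go vet: %d used with a string argument
        }
        return
        fmt.Println("done") // go vet: unreachable code
    }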

And now, the newcomers (again with an example after the list):

  • staticcheck: the (most?) powerful golang static analyzer out there
    • the only one that actually checks for Deprecated fields
    • simplifies code and makes it more idiomatic
    • has a well-documented list of rules, so you always know why it asks for changes
  • go-mod-outdated: checks for outdated dependencies
    • you might want to run it with -direct -update to only show direct dependencies that have an update available
    • avoids much of the hassle of keeping go.{mod,sum} up to date
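
Here is a small, hypothetical snippet making staticcheck’s checks concrete; the codes in the comments refer to its documented rules:

    package checkdemo

    import "strings"

    // Headline upper-cases a title and emphasizes Go-related ones.
    func Headline(s string) string {
        // SA1019: strings.Title is marked Deprecated in recent Go releases,
        // and staticcheck is the tool that actually reports such uses.
        title := strings.Title(s)

        // S1003: staticcheck suggests strings.Contains(title, "Go") instead.
        if strings.Index(title, "Go") != -1 {
            title += "!"
        }
        return title
    }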

Of course, all these tools can easily be added to a Continuous Integration system, failing the build if any of them reports an error.
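
A minimal sketch of what such a CI step could run, assuming the tools are already installed (note that gofmt always exits with 0, so its output has to be checked explicitly, and go-mod-outdated needs -ci to fail on findings):

    # fail if any file is not properly formatted
    test -z "$(gofmt -s -l .)"

    go vet ./...
    golint -set_exit_status ./...
    staticcheck ./...

    # exit non-zero when a direct dependency is outdated
    go list -u -m -json all | go-mod-outdated -direct -update -ci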

Crypto in Angular and Nativescript

Using typescript and npm has two big advantages: there are many modules available, and it runs in many environments. Unfortunately, the latter is also one of the biggest challenges: because you can configure every aspect of the system, you can break it very easily.

At C4DT we often use the crypto library, or its crypto-browserify counterpart. This is a very difficult module to get right in all three environments: Node has native support, the browser offers an incompatible crypto environment, and nativescript does something else entirely.

To test and understand the best way to do this, I created https://github.com/c4dt/crypto-ts. It states the problem and offers a solution. There are other solutions, but our engineering team thinks this is the easiest.

Comments are welcome in the issue section.

Kubernetes

Want to automate pretty much every aspect of a deployed application’s lifespan? Having issues with reliability using docker-compose? Don’t want to care about how to expose the application to the outside? Want a reproducible infrastructure of containers, deployed in a fast and resilient manner? Then Kubernetes is for you. In a word, it’s a container orchestrator aimed at datacenters.

First, you need to deploy a master node, which holds the configuration and acts as the main API server, then a few nodes connecting to it; of course, you can have multiple masters for high availability. There is plenty of information on kubernetes.io on how to get started.

Now that it is up and running, you have to choose between two modes of development: either a file-based one, encoding the wanted state of the system, or a command-line-based one, where you evolve the running system until you’re happy with it, then dump and clean the configuration to end up back at the file-based approach. I found that devops people prefer the latter; here I’ll go with the former.
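
Concretely, the two modes look something like this (hypothetical nginx example, using kubectl, Kubernetes’ command-line client):

    # file-based: describe the wanted state in a file, then apply it
    kubectl apply -f nginx-deployment.yaml

    # command-line based: evolve the running system...
    kubectl create deployment nginx --image=nginx
    kubectl expose deployment nginx --port=80

    # ...then dump the configuration to switch to the file-based approach
    kubectl get deployment nginx -o yaml > nginx-deployment.yaml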

Let’s take an example: I’m currently working on deploying Drynx, a decentralized and secure system for machine learning on distributed datasets, to show its power via our demonstrator. To get a better understanding of what Drynx is computing, we want to actually show these datasets in our web interface, so we want to provide an endpoint to retrieve them, i.e. an nginx serving some files via HTTP. In Kubernetes terms, it means having a deployment for some nginx pod, exposed as a service (see the manifest sketch after the list below).

  • pod: an instance of an application, running inside a container, which can crash or be killed for any reason: a pod being updated, the system killing it, some maintenance needed on the host, …
    • so if you want state to be stored or shared between containers, you need to use a volume
      • in our case, we actually share the datasets between nginx and the Drynx nodes
    • one quite useful feature is initContainers, which run before the main containers; it’s usually where you generate the configuration
      • the way to transfer state between the init containers and the normal ones is an emptyDir volume, which lives as long as the pod does (so it survives container restarts)
  • deployment: handles pods: how many you need at any point, how to transition to a new application version; it will (re)start or kill its managed pods to get to the state you want
    • it’s the main type of file to write, as it contains a template for the pods to create; I’ve yet to find a common use case for writing a pod file directly
  • service: the entrypoint for HTTP calls; as pods can be created at any IP and can be quite short-lived, we want a stable address to connect to, usually also routable from outside of the datacenter
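
Putting it all together, here is a trimmed-down sketch of what the manifests for our nginx endpoint could look like; the names, labels and the init command are made up for the example, and the real Drynx setup has more moving parts:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: datasets-nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: datasets-nginx
      template:
        metadata:
          labels:
            app: datasets-nginx
        spec:
          # emptyDir lives as long as the pod, surviving container restarts
          volumes:
            - name: datasets
              emptyDir: {}
          # runs before nginx, e.g. to fetch or generate the datasets
          initContainers:
            - name: fetch-datasets
              image: busybox
              command: ["sh", "-c", "echo placeholder > /data/dataset.csv"]
              volumeMounts:
                - name: datasets
                  mountPath: /data
          containers:
            - name: nginx
              image: nginx
              ports:
                - containerPort: 80
              volumeMounts:
                - name: datasets
                  mountPath: /usr/share/nginx/html
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: datasets
    spec:
      # stable address in front of the short-lived pods
      selector:
        app: datasets-nginx
      ports:
        - port: 80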

So with that, you have an application which can crash and be upgraded without you caring anymore about how it has to be exposed to the outside or how to handle running connections. You have a clear view of what is running, how it is available and how well it is running, without caring about little details like physical location or hardware.