Verifier

The general architecture we have seen so far is composed of brokers and clients.

The client publishes to and subscribes to topics, which represent branches, and receives the new commits.

Those commits are encrypted, and only the clients that possess the ReadCap can decrypt them.

Decrypting the commits is the job of the Verifier, which runs inside the App.

The Verifier takes the commits one by one, respecting the causal order that links them to one another (the earliest causal past is processed first), and applies the CRDT updates/patches that each commit contains.

Eventually, the full content of the document is reconstructed, and we call this the “materialized state” of the document.
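As an illustration, here is a minimal TypeScript sketch of this replay logic. All the type and function names below (Commit, decryptCommit, applyToCrdt, and so on) are hypothetical and for illustration only: the real Verifier is implemented in Rust and its internals differ.

```ts
// Hypothetical types, illustrating causal-order replay only.
interface Commit {
  id: string;
  deps: string[]; // ids of the direct causal past of this commit
  encryptedBody: Uint8Array;
}
type DocState = unknown;
declare function emptyDocState(): DocState;
declare function decryptCommit(body: Uint8Array, readCap: Uint8Array): unknown;
declare function applyToCrdt(state: DocState, patch: unknown): DocState;

// Order commits so that each commit's causal past is processed first.
function causalOrder(commits: Commit[]): Commit[] {
  const byId = new Map(commits.map((c) => [c.id, c] as [string, Commit]));
  const visited = new Set<string>();
  const ordered: Commit[] = [];
  const visit = (c: Commit) => {
    if (visited.has(c.id)) return;
    visited.add(c.id);
    for (const dep of c.deps) {
      const parent = byId.get(dep);
      if (parent) visit(parent); // earliest causal past first
    }
    ordered.push(c);
  };
  commits.forEach(visit);
  return ordered;
}

// Decrypt each commit with the ReadCap and apply its CRDT patch:
// the result is the "materialized state" of the document.
function materialize(commits: Commit[], readCap: Uint8Array): DocState {
  let state = emptyDocState();
  for (const commit of causalOrder(commits)) {
    const patch = decryptCommit(commit.encryptedBody, readCap);
    state = applyToCrdt(state, patch);
  }
  return state;
}
```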

Inside the web-app, the Verifier has to replay all the commits of a branch, one by one, every time it needs access to the content of a document (after a page refresh, the first time the doc is opened, or after each new login).

This is because the webapp does not yet have a “User Storage”: it keeps the materialized state only in memory.

By contrast, the native apps have a User Storage and can save the materialized state to disk. When a new commit arrives and needs to be processed, the Verifier just opens the locally stored materialized state and applies the new commit to it. It doesn’t need to reprocess all the commits like the webapp does.
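Continuing the hypothetical sketch above, the difference between the two paths can be summarized like this (loadMaterializedState, saveMaterializedState, and allCommitsOf are assumed helpers standing in for the User Storage and the commit log):

```ts
declare function loadMaterializedState(docId: string): DocState | null;
declare function saveMaterializedState(docId: string, state: DocState): void;
declare function allCommitsOf(docId: string): Commit[];

function onNewCommit(docId: string, commit: Commit, readCap: Uint8Array): DocState {
  const cached = loadMaterializedState(docId); // from the User Storage
  if (cached === null) {
    // webapp path: no User Storage yet, so the whole branch is replayed
    return materialize(allCommitsOf(docId), readCap);
  }
  // native-app path: apply only the new commit to the stored state
  const patch = decryptCommit(commit.encryptedBody, readCap);
  const updated = applyToCrdt(cached, patch);
  saveMaterializedState(docId, updated); // re-encrypted at rest (see below)
  return updated;
}
```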

The limitation in the webapp comes from the fact that we use RocksDb to store the materialized state, and we do not yet have a version of RocksDb that works in the browser; this feature will be added in the future.

It should also be noted that all the data stored in the User Storage is encrypted at rest. As explained above, the materialized state is a plaintext aggregation of all the commits. For this reason, it has to be encrypted again before being saved. The encryption used is not the same as the one for the commits: it is RocksDb itself that transparently encrypts all the records (thanks to a plugin that we implemented). The encryption key is stored in the wallet and is unique per device.
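Conceptually, this at-rest encryption is equivalent to the following sketch, using an AEAD cipher (ChaCha20-Poly1305 here, purely as an example) with a per-device key taken from the wallet. This is not the actual plugin code, which lives inside RocksDb and is written in Rust; deviceKeyFromWallet is an assumed helper.

```ts
import { createCipheriv, randomBytes } from "node:crypto";

declare function deviceKeyFromWallet(): Buffer; // 32-byte key, unique per device

// Encrypt one record before it is written to disk.
function encryptRecord(plaintext: Buffer): Buffer {
  const key = deviceKeyFromWallet();
  const nonce = randomBytes(12); // fresh nonce per record
  const cipher = createCipheriv("chacha20-poly1305", key, nonce, {
    authTagLength: 16,
  });
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  // store the nonce and auth tag alongside the ciphertext
  return Buffer.concat([nonce, ciphertext, cipher.getAuthTag()]);
}
```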

App API

The Verifier offers an API to the App, with which the App can read and write the data, and also query the RDF graph (a usage sketch follows the list below).

This API is present locally in:

  • the native Apps (based on Tauri)

  • a Tauri plugin that developers can use to create Tauri-based apps using the NextGraph Framework (not ready and not yet planned)

  • the Rust library (crate “nextgraph”)

  • the CLI

  • the ng-sdk-js library, for developers who want to build their own apps with front-end frameworks like Svelte, React, Vue, Angular, etc.

  • the ng-sdk-node library (npm package called “nextgraph”), that can be used in nodeJS or Deno in order to access the data of documents in backend services.
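For a feel of what this API looks like from JavaScript, here is a hedged sketch using the ng-sdk-node package. The exact function names and argument shapes may differ; treat them as indicative and check the SDK documentation.

```ts
import ng from "nextgraph";

// Indicative only: the session-opening call and its arguments vary
// depending on whether the Verifier is local, in-memory, or remote.
async function readSomeTriples(walletName: string, userId: string) {
  const session = await ng.session_start(walletName, userId); // indicative
  // query the RDF graph of the user's documents with SPARQL
  const results = await ng.sparql_query(
    session.session_id,
    "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"
  );
  console.log(results);
}
```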

The two JavaScript libraries do not have a User Storage, so they only support in-memory Verifiers.

As the “User Storage for Web” feature will take some time to be coded, we offer another way to solve the problem of volatile materialized state in JS.

There is also the idea of having a full-fledged Verifier running inside nodeJS. This would use the NAPI-RS system, which compiles the Rust code of the Verifier together with the RocksDb code into a binary library compatible with nodeJS, running inside the nodeJS process. This, too, will take some time to be coded.

Instead, for both cases (JS in the web and in nodeJS), we offer the App API that connects to a remote Verifier.

The JS libraries can then connect to such a remote Verifier and use the full set of functionalities, without needing to replay all the commits at every load.

Where can we find remote Verifiers? In ngd, the daemon of NextGraph.

Usually ngd only acts as a broker, but it can be configured and used as a Verifier too.

In which use cases is this useful?

  • when the end-user doesn’t have a supported platform on which to install the native app. For example, a workstation running OpenBSD or FreeBSD doesn’t have a native app to download (and the app cannot be compiled either, as Tauri doesn’t support those platforms). In this case, the end-user has to launch a local ngd and open the webapp in their browser (http://localhost:1440). The Verifier will run remotely, inside ngd (which isn’t very far: it is on the same machine). Because it is on localhost or in a private LAN, we do allow the webapp to be served over http (without TLS), and the websocket also works well without TLS. But this no longer works if the public IP of the ngd server is used.

  • when a nodeJS service needs access to the documents and does not want to use the in-memory Verifier, because it needs quick access (like a headless CMS, Astro, an AI service like jan.ai, a SPARQL REST endpoint, an LDP endpoint, etc.). In this case, an ngd instance has to run on the same machine as the nodeJS process, or in the same LAN (a Docker network, for example).

  • in headless mode, when a server uses ngd as a quadstore/document store and the full credentials of the user identity have been delegated to that server. This is the case for ActivityPods, for example (see the sketch after this list).

  • on the SaaS/cloud of NextGraph, we run some ngd brokers that normally would not have any Verifier. But in some cases, at the request of the end-user, we can run some Verifiers that have limited access to certain documents or stores of the user, if they want to serve their data as REST/HTTP endpoints, for example. The end-user will have to grant this remote Verifier access to those resources by providing their DID capabilities. A Verifier can see in clear all the data that it manipulates, so obviously users have to be careful about where they run a Verifier and to whom they give the capabilities.
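As an illustration of the headless use case, here is a sketch based on the nextgraph npm package. The config fields and function names are indicative, and the placeholder values obviously need to be replaced with real credentials.

```ts
import ng from "nextgraph";

// Placeholder credentials: in headless mode these are delegated to the server.
const config = {
  server_peer_id: "<peer id of the ngd server>",
  admin_user_key: "<admin user key>",
  client_peer_key: "<client peer key>",
};

async function main() {
  await ng.init_headless(config); // indicative name
  const session = await ng.session_headless_start("<user id>");
  const results = await ng.sparql_query(
    session.session_id,
    "SELECT * WHERE { ?s ?p ?o } LIMIT 5"
  );
  console.log(results);
  await ng.session_headless_stop(session.session_id, true);
}

main();
```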

What is important to understand is that the Verifier needs to run in a trusted environment, because it holds the ReadCaps of the documents it is going to open, and in some cases it even holds the full credentials of the User Identity, giving it access to the whole set of documents in all stores of the user.

The Verifier is the terminal point where the E2EE ends. After that, the whole AppProtocol deals with plaintext data.

For this reason, the Verifier should normally only run on a computer or device that is owned and controlled by the end-user.

Remote Verifier

  • A specific user wants to run a remote Verifier on the server instead of running their Verifier locally. This is the case for end-users on platforms that are not supported by Tauri, which powers all the native apps. The end-user on those platforms has to run a local ngd daemon instead, and access the app in their browser of choice, at the URL http://localhost:1440. Here the breaking of E2EE is acceptable, as the decrypted data will reside locally, on the machine of the user. As the webapp cannot save decrypted user data yet, it has to reprocess all the encrypted commits at every load. In order to avoid this, running a remote Verifier on the local ngd is a solution, as ngd can save the decrypted user data locally, if the user gave permission for it. The API for this use case is session_start_remote; the credentials (usually stored in the user’s wallet) are extracted from the wallet and passed to ngd (a sketch follows below). The rest of the “session APIs” can then be used in the same manner as with a local Verifier. The present JS library connects to the server transparently and opens a RemoteVerifier there. The remote session can be detached, which means that even after the session is closed, or when the client disconnects from ngd, the Verifier keeps running in the daemon. This “detached” feature is useful when we want some automatic actions that only the Verifier can perform to happen in the background (signing, for example, is a background task).

  • The second use case is what we call a Headless server (because no wallets connect to it). It departs a bit from the general architecture of NextGraph, as it is meant for backward compatibility with the web 2.0 federation, based on domain names and without E2EE. This mode of operation allows users to delegate all their trust to the server. In the future, we will provide the possibility to delegate access to only some parts of the user’s data. In Headless mode, the server can be used in a traditional federated way, where the server can see the user’s data in clear and act accordingly. We have in mind here to offer bridges to existing federated protocols like ActivityPub and Solid (via the ActivityPods project) at first, and later to add other protocols like ATproto, Nostr, XMPP, and even SMTP! Any web 2.0 federated protocol could be bridged. At the same time, the bridging ngd server would still be a fully-fledged ngd daemon, thus offering all the advantages of NextGraph to its users, who could decide to port their data somewhere else, restrict the server’s access to their own data, interact and collaborate with other users (of the federation or of the whole NextGraph network) in a secure and private way, and use the local-first NG app to access their own data offline.

All of those use cases are handled with the present nodeJS library, using the API described here.
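For the first use case, here is a hedged sketch of opening a RemoteVerifier. session_start_remote is the API named above, but the exact argument shape shown here is an assumption for illustration.

```ts
import ng from "nextgraph";

async function startRemoteSession(walletName: string, userId: string) {
  // The credentials are extracted from the user's wallet and passed to ngd;
  // the argument shape below is an assumption, not the confirmed signature.
  const session = await ng.session_start_remote(walletName, userId);
  // From here on, the usual "session APIs" behave as with a local Verifier.
  return session;
}
```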

Client Protocol

The Verifier talks to the Broker with the ClientProtocol, and receives the encrypted commits via this API. It also subscribes and publishes to the Pub/Sub with that API.

Then, it exposes the AppProtocol to the application level; this is what is used to access and modify the data.

Sometimes the Verifier and the Broker are on the same machine, in the same process, so they use the LocalTransport, which doesn’t even touch the network interface. That’s the beauty of NextGraph’s code: it has been designed from the beginning with many use cases in mind.

Sometimes the Verifier and the App are in the same process; sometimes they need a websocket between them. But all of this is an implementation detail. For developers, the same API is available everywhere: in nodeJS, in front-end Javascript, in Rust, and, similarly, as commands in the CLI, regardless of where the Verifier and the Broker are actually located.
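Conceptually, this transport choice can be pictured as follows; the interfaces below are illustrative only, not NextGraph’s actual types.

```ts
interface Transport {
  send(msg: Uint8Array): Promise<void>;
  onReceive(handler: (msg: Uint8Array) => void): void;
}

// Same process: messages are handed over in memory, never touching a socket.
class LocalTransport implements Transport {
  peer?: LocalTransport; // the other endpoint, living in the same process
  private handler?: (msg: Uint8Array) => void;
  async send(msg: Uint8Array) {
    this.peer?.handler?.(msg);
  }
  onReceive(h: (msg: Uint8Array) => void) {
    this.handler = h;
  }
}

// Different processes or machines: the same interface over a websocket.
class WebSocketTransport implements Transport {
  constructor(private ws: WebSocket) {
    ws.binaryType = "arraybuffer";
  }
  async send(msg: Uint8Array) {
    this.ws.send(msg);
  }
  onReceive(h: (msg: Uint8Array) => void) {
    this.ws.onmessage = (e) => h(new Uint8Array(e.data as ArrayBuffer));
  }
}
```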

In some cases, a broker (ngd) will run, say, on localhost or within a LAN, without being directly connected to the core network. Such a broker is called a Server Broker, and it doesn’t join the core network. Instead, it needs to establish a connection to a CoreBroker that will join the core network on its behalf. It uses the ClientProtocol for that, in a special mode called “Forwarding”: it forwards all ClientProtocol requests coming from the Verifier(s) to another broker, the CoreBroker. It keeps local copies of the events and manages a local table of pub/sub subscriptions, but does not join overlays by itself; this is delegated to the CoreBroker(s) it connects to.

This Forwarding Client Protocol is not coded yet (but it is just an add-on to the ClientProtocol).

Also, the Relay/Tunnel feature is not finished yet, but very few tasks remain before it is up and running.

Finally, the CoreProtocol, used between core brokers, has not been coded yet and will need more work. It implements the LoCaPs algorithm, which guarantees partial causal order of delivery of the events in the pub/sub while minimizing the need for direct connectivity: only one stable path within the core network is needed between two core brokers in order to guarantee correct delivery.

More on the Client Protocol here.