Qubic bxid archival service

Jan 6, 2024

Qubic is unusual in ways that make it difficult to interface directly with much of the existing crypto infrastructure. The goal of this proposal is to define a unique hash, bxid, that represents confirmed and instantly final proof that value was transferred. An API indexed by bxid would make it much easier to integrate Qubic with systems that expect a traditional confirmed crypto txid.

The bxid, Balance Transfer ID, can be calculated locally before the transaction is included in a tick. This allows wallets to display the bxid to users and to query an API service for it. If the bxid is not present after the specified tick, the transaction failed to transfer value.

For the normal case of a QU transfer, the bxid is defined as the K12 hash of epoch + tick + srcpubkey + destpubkey + amount, in a byte format that matches the logfile entry. For other transaction types the bxid can follow the same convention by hashing the logfile entry directly; however, be sure to skip past the date fields, as they cannot be known at the time the transaction is created.
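As a minimal sketch of the idea, the code below concatenates the fields in a fixed byte layout and hashes them. Note the assumptions: the real bxid uses the K12 (KangarooTwelve) hash, which is not in the Python standard library, so SHAKE-256 stands in here; the field widths (u16 epoch, u32 tick, 32-byte public keys, u64 amount, little-endian) are illustrative guesses at the logfile layout, not the confirmed format.

```python
import hashlib

def bxid(epoch: int, tick: int, srcpubkey: bytes, destpubkey: bytes,
         amount: int) -> str:
    """Illustrative bxid: hash the transfer fields in a fixed byte layout.

    ASSUMPTIONS: SHAKE-256 stands in for K12 (no K12 in the stdlib);
    field widths/endianness are guesses at the logfile entry format.
    """
    data = (epoch.to_bytes(2, "little")      # epoch as u16
            + tick.to_bytes(4, "little")     # tick as u32
            + srcpubkey                      # 32-byte source public key
            + destpubkey                     # 32-byte destination public key
            + amount.to_bytes(8, "little"))  # amount as u64
    return hashlib.shake_256(data).hexdigest(32)  # 32 bytes -> 64 hex chars
```

Because every input field is known to the wallet before submission, the same value can be computed client-side and later looked up in the archive.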

The above works for normal transfers because of the restriction of one normal transfer per address per tick. The problem is SC balance changes: these have no transaction, the source is just the SC number, and there is no limit on how many times a destination can be paid in a single tick. Multiple such payments would produce the same bxid, since all the fields would be identical. After much discussion on Discord it became clear that these transactionless balance changes need special-case handling to keep the bxid unique per balance change. My idea is to calculate the bxid for SC balance changes with an amount of 0, so it can still be computed in advance. A query for that bxid would then return the sum of all balance changes for that destination in that tick. Usually there would be only one, but this way an arbitrary number can be handled.
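The aggregation step above can be sketched as follows. This is an assumption-laden illustration: the entry dicts and their field names are hypothetical, standing in for parsed logfile entries of the transactionless SC balance-change type.

```python
from collections import defaultdict

def sc_balance_totals(entries):
    """Sum transactionless SC balance changes per (tick, destination).

    Since the bxid for these entries is computed with amount = 0, all
    changes to the same destination in the same tick share one bxid;
    a query for that bxid returns the summed amount computed here.
    (Entry field names are hypothetical.)
    """
    totals = defaultdict(int)
    for e in entries:
        totals[(e["tick"], e["dest"])] += e["amount"]
    return totals
```

With this scheme a destination paid three times by an SC in one tick still resolves to a single, unique bxid whose amount is the three payments combined.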

There are two cases for a bxid lookup. In the main case, the bxid is added to the database as the log entries for the specified tick are processed, and the wallet that submitted the transaction queries the API to see whether the bxid exists. If it does, the transaction succeeded; if it does not, the transaction failed to transfer balance, even if it was included in the tick data.

One thing to be careful about is that log processing will not be exactly realtime, so the API service should return the latest tick seen during log processing. Once tick N has been processed, no new log entries for ticks less than N will arrive, so the absence of a bxid after the specified tick has been processed can be used as evidence of transaction failure.

On the issue of cryptographically proving the bxid information provided by the API service, it again splits into two cases. The easy case is when the bxid exists: the matching txid can be found in the included tick and the transaction validated using the usual methods. The second case, where the txid was included in the tick but the bxid is absent, takes a bit more work to cryptographically validate.

First the entity info is validated to get the current balance. Then all the bxids for the epoch involving the specific address are summed, adding or subtracting depending on whether the address is the destination or the source. Applying this net change to the balance in the beginning-of-epoch spectrum file generates a bxid-based balance. If that matches the cryptographically validated entity balance, we can see that the bxid records arrive at the same balance.
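The reconciliation step above amounts to a simple fold over the epoch's bxid records. A minimal sketch, assuming each record carries `src`, `dest`, and `amount` fields (hypothetical names for the archived JSON):

```python
def reconcile(start_balance: int, address: str, records) -> int:
    """Replay an epoch's bxid records against the beginning-of-epoch
    spectrum balance for one address.

    Subtract when the address is the source, add when it is the
    destination; the result should equal the cryptographically
    validated entity balance if the bxid archive is complete.
    """
    balance = start_balance
    for r in records:
        if r["src"] == address:
            balance -= r["amount"]
        if r["dest"] == address:
            balance += r["amount"]
    return balance
```

A mismatch between this replayed balance and the validated entity balance would indicate a missing or spurious bxid record.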

Now we can move onto the implementation side of the bxid archival service. Once this is running and publicly available it will dramatically simplify the integration of external services.

I made a working proof of concept: qubic-cli/bxid.cpp in the Qsilver97/qubic-cli repository on GitHub.

It supports two functions: creating a bxid from transaction details, and generating JSON output from the qubic logfile entries. Specific fullnodes need to be configured to generate logfiles and to serve specific clients protected by passcodes. My release allows anybody with access to logfiles to create a bxid archival service, and of course to extend it with whatever other JSON fields are desired.

Example JSON created from a logfile entry:

{ "index" : { "_index": "bxid", "_id" : "397e947847ada93de80907d88a835419fb532b3ca1fd68b3c95ebab11cd24190" } }
{ "utime": "1707059413", "epoch": "90", "tick": "11867469", "type": "1", "src": "LZLDOEIBQWIUGGMZGOISLOAACDGAFVAMAYXSSJMLQBHSHWDBPMSDFTGAYRMN", "dest": "QHQPMJVNGZJGZDSQREFXHHAZFYPBIYDOTFAOTTWGYCWGTIRNGBVMKBGGNDDA", "amount": "1521139" }

These are the pair of lines needed as bulk input by the Charmed OpenSearch system, which can be installed by following the guide on Charmhub.

After that we magically get a REST API service! See the OpenSearch documentation for the query syntax.

The best part is that a simple log processing loop creates the archival service with the above REST interface, which allows querying by any field (bxid, source, destination, even tick or epoch) using curl.

curl --cacert demo-ca.pem -XGET https://<username>:<password>@<ipaddr>:9200/bxid/_doc/397e947847ada93de80907d88a835419fb532b3ca1fd68b3c95ebab11cd24190

{ "_index": "bxid", "_id": "397e947847ada93de80907d88a835419fb532b3ca1fd68b3c95ebab11cd24190", "_version": 2, "_seq_no": 32754, "_primary_term": 1, "found": true, "_source": { "utime": "1707059413", "epoch": "90", "tick": "11867469", "type": "1", "src": "LZLDOEIBQWIUGGMZGOISLOAACDGAFVAMAYXSSJMLQBHSHWDBPMSDFTGAYRMN", "dest": "QHQPMJVNGZJGZDSQREFXHHAZFYPBIYDOTFAOTTWGYCWGTIRNGBVMKBGGNDDA", "amount": "1521139" } }

© 2024 Qubic. All Rights Reserved.

