Add encodeExtendedJson and decodeExtendedJson with support for BigInts, TypedArrays, and Dates #108
base: master
Conversation
Force-pushed from d670d17 to 5ad6520
Thanks for this PR @mainnet-pat! Sorry to keep you waiting here. I think it makes sense to formalize this sort of format a bit more, maybe calling it (within Libauth) "extended JSON" and providing dedicated encode and decode utilities. I'd previously (~5 years ago 🤯) avoided including this re-encoding behavior. So: I'd love to pull this in! I think the existing string pattern approach is a reasonable starting point. If you (and/or @jimtendo, who just reminded me about this issue) are interested in expanding this PR to break out these utilities, I'd be happy to review.
Whatever it is in the end, I beg that it be explicit about types rather than inferring them. I think this does that already, but just putting it out there as a red line that we don't cross.
I never quite understood the need for a special `Uint8Array` encoding. @mainnet-pat, is there an explanation somewhere for wanting that?
This approach will generally be less efficient than manually encoding/decoding specific fields/paths to their desired types, as it requires traversing every field in the JSON tree. But the trade-off is that it greatly reduces code complexity, in the sense that we do not have to double up on interfaces. So the best lens to view this from is probably as a way to decrease complexity, not as a way to increase efficiency.

For example, imagine we have the following interface which we need to transport over the network as JSON:

```ts
interface AnyHedgeContract {
  makerPayoutLockingBytecode: Uint8Array;
  takerPayoutLockingBytecode: Uint8Array;
  makerPublicKey: Uint8Array;
  takerPublicKey: Uint8Array;
  hedgeSats: bigint;
  longSats: bigint;
  // ... other fields
}
```
If we wanted to type this correctly for transport, we would basically need to duplicate it and replace the `Uint8Array` and `bigint` fields with `string`:

```ts
interface AnyHedgeContractJSON {
  makerPayoutLockingBytecode: string;
  takerPayoutLockingBytecode: string;
  makerPublicKey: string;
  takerPublicKey: string;
  hedgeSats: string;
  longSats: string;
  // ... other fields
}
```

If we were just doing the above, this is probably still quite easily manageable (the interfaces above aren't too complex). But if we were to do this for every interface that needs to be transported (and those with arrays of data), we wind up with a lot of (near-)duplicated code and also a lot of additional code to encode/decode to/from these transport interfaces. Having a single generic pair of encode/decode functions avoids all of that (see the round-trip sketch below).
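For instance, transport then collapses to a round-trip (a sketch assuming the PR's `encodeExtendedJson`/`decodeExtendedJson` take an arbitrary value and a JSON string respectively):

```ts
// Sketch: assumes the encodeExtendedJson/decodeExtendedJson proposed in this PR.
declare const contract: AnyHedgeContract;

const wire: string = encodeExtendedJson(contract); // native types -> tagged JSON string
const restored = decodeExtendedJson(wire) as AnyHedgeContract; // tagged JSON -> native types
```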
It is explicit. It checks the types being serialized, such that:

```ts
{
  // ...
  takerPublicKey: Uint8Array;
  hedgeSats: bigint;
  // ...
}
```

... will become the JSON payload:

```ts
{
  takerPublicKey: "<Uint8Array: HEX_DATA>", // Because the type in the object is Uint8Array
  hedgeSats: "<bigint: 123456789123456789>", // Because the type in the object is bigint
}
```

(There's no clever "does this number need to be a bigint"... it just looks at what the native type itself is.)
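A minimal sketch of how this tagged encoding could be implemented with `JSON.stringify`/`JSON.parse` replacer/reviver hooks (the names mirror the PR's `encodeExtendedJson`/`decodeExtendedJson`, but these bodies are an illustration covering only the two tags shown above, not the PR's actual implementation):

```ts
const bytesToHex = (bytes: Uint8Array): string =>
  [...bytes].map((b) => b.toString(16).padStart(2, '0')).join('');

const hexToBytes = (hex: string): Uint8Array =>
  Uint8Array.from(hex.match(/.{2}/g) ?? [], (byte) => parseInt(byte, 16));

// Encode: tag values by their native type (no inference from content).
export const encodeExtendedJsonSketch = (value: unknown): string =>
  JSON.stringify(value, (_key, item) =>
    typeof item === 'bigint'
      ? `<bigint: ${item.toString()}>`
      : item instanceof Uint8Array
        ? `<Uint8Array: ${bytesToHex(item)}>`
        : item,
  );

// Decode: restore tagged strings to their native types.
export const decodeExtendedJsonSketch = (json: string): unknown =>
  JSON.parse(json, (_key, item) => {
    if (typeof item !== 'string') return item;
    const big = /^<bigint: (-?\d+)>$/.exec(item);
    if (big !== null) return BigInt(big[1]);
    const bin = /^<Uint8Array: ((?:[0-9a-f]{2})*)>$/.exec(item);
    if (bin !== null) return hexToBytes(bin[1]);
    return item;
  });
```

Because the replacer sees each value's native type before serialization, a `bigint` field is always tagged while a plain `number` passes through untouched, which preserves the explicitness discussed above.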
Thanks for the explanation Jim. I would really like to split the discussion - I'm not making any statement about the bigint part; that is strictly necessary. I'm saying that pushing a custom binary format for `Uint8Array` is a separate question. Regarding hex vs. whatever encoding being more efficient or less efficient - I think we can agree that doesn't really matter when the difference is going to be marginal either way. Also thanks for showing that it's explicit 🙏. Hope it stays that way.
@emergent-reasons I agree that passing binary data through JSON should remain an explicit, visible choice. In the JS context, Libauth requires `Uint8Array` and `BigInt` values throughout its APIs. These days (as of Libauth v2), the Libauth formats that are intended for significant external consumption are all represented with JSON-safe data types (there are lots of hex-encoded strings and string-encoded BigInts, for example). In the case of these extended JSON utilities, the type tags keep the conversion visible. That said, I don't expect to use this format in Libauth's own data structures.
@bitjson would you be open to a tree-traversing function to go along with this also? There are some middleware use cases where this would be useful. To give an example, if one wanted to do something like this as ExpressJS middleware:

```ts
// Use express.json() to handle JSON payloads.
// NOTE: We could rewrite the functionality contained in here, but it's high
//       effort because it also supports things like Gzipped payloads.
expressInstance.use(express.json());

// Add middleware to convert to Extended JSON.
expressInstance.use(extendedJson());
```

... we would have to do something similar to the following, where we double up on JSON parsing (string to object and back to string, then string to object again). I've posted the whole example in case it's useful for others and highlighted the portions I'm referring to as "PROBLEM POINT":

```ts
import { NextFunction, Request, Response } from 'express';

export const extendedJson = function(): Array<(req: Request, res: Response, next: NextFunction) => void>
{
    // TODO: This is currently VERY inefficient in that we are doubling up on stringify and parse operations.
    //       We either want to replace this with a custom tree-traversal for objects or implement express.json() ourselves.

    // Define middleware function that decodes Extended JSON (for requests).
    const decodeExtendedJsonMiddleware = function(req: Request, _res: Response, next: NextFunction): void
    {
        // Decode only if there is a request body.
        if(req.body)
        {
            // PROBLEM POINT
            req.body = decodeExtendedJson(JSON.stringify(req.body));
        }

        // Move forward to next middleware in our route.
        next();
    };

    // Define middleware function that encodes Extended JSON (for responses).
    const encodeExtendedJsonMiddleware = function(_req: Request, res: Response, next: NextFunction): void
    {
        // Store our original send method so that we can invoke it later.
        const originalSend = res.send;

        // Define updated send method that supports Extended JSON.
        const sendWithExtendedJsonSupport = function(data: any): Response<any, Record<string, any>>
        {
            // Check if the data is an object or a bigint.
            if(typeof data === 'object' || typeof data === 'bigint')
            {
                // Convert the object to our Extended JSON format.
                // PROBLEM POINT
                const dataAsExtJSON = JSON.parse(encodeExtendedJson(data));

                // Send the Extended JSON format.
                return originalSend.call(res, dataAsExtJSON);
            }

            // If the data is not an object, just send it as is.
            return originalSend.call(res, data);
        };

        // Override the send function.
        res.send = sendWithExtendedJsonSupport;

        // Move forward to next middleware in our route.
        next();
    };

    // Return an array of our middleware for handling Extended JSON in
    // a) our requests and b) our responses.
    return [ decodeExtendedJsonMiddleware, encodeExtendedJsonMiddleware ];
};
```

If we added a tree-traversal function, we could optionally avoid that inefficient doubling up. This applies to most libraries that cast from string to object or object to string too (e.g. PeerJS).
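For reference, a tree traversal along those lines might look roughly like this (a sketch: the name `decodeExtendedValue` is hypothetical, not part of this PR, and it assumes the `<Uint8Array: …>`/`<bigint: …>` tag formats shown earlier):

```ts
// Sketch of a hypothetical object-level decoder; name and API are
// illustrative only. Walks an already-parsed JSON tree and restores
// tagged strings to their native types without re-stringifying.
const decodeExtendedValue = (item: unknown): unknown => {
  if (typeof item === 'string') {
    const big = /^<bigint: (-?\d+)>$/.exec(item);
    if (big !== null) return BigInt(big[1]);
    const bin = /^<Uint8Array: ((?:[0-9a-f]{2})*)>$/.exec(item);
    if (bin !== null) {
      return Uint8Array.from(bin[1].match(/.{2}/g) ?? [], (b) => parseInt(b, 16));
    }
    return item;
  }
  if (Array.isArray(item)) return item.map(decodeExtendedValue);
  if (item !== null && typeof item === 'object') {
    return Object.fromEntries(
      Object.entries(item).map(([key, value]) => [key, decodeExtendedValue(value)]),
    );
  }
  return item;
};
```

With something like this, the request middleware's `decodeExtendedJson(JSON.stringify(req.body))` above could become a single `decodeExtendedValue(req.body)` call, skipping the extra stringify/parse round-trip.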
Certainly! As long as it has full test coverage (so we can maintain it easily), no problem exposing additional related utilities. If I'm understanding correctly, you're looking for a version that operates on an already-parsed object tree rather than a string. I'd suggest choosing a similar name with a clarifying prefix.
Yep, exactly!
Sounds perfect, thank you!
Jim is putting a placeholder for this feature into some code we are working on. Looking at the details, one thing jumped out at me - I recommend not using a facsimile of JavaScript bigint naming and notation. It sets up an expectation that the format is just a wrapped-up JavaScript bigint, which I doubt is a design goal due to the additional complexity. Is it a design goal? For example, if JavaScript bigint notation is a design goal, then every language will need to implement a complete version of JavaScript bigint, including all possible representations. I'd recommend something neutral and narrowly defined for purpose, such as a plain base-10 integer tag, or something similar.
For me, the main goal would just be landing on something that the ecosystem can standardize on. But I do tend to agree with @emergent-reasons that it may be better to have these encoded in a JS-agnostic manner.
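To make the distinction concrete (both spellings below are placeholders, not a settled format):

```ts
// A tag that mirrors JavaScript's type name:
const jsFlavored = '"<bigint: 123456789123456789>"';

// A neutral, narrowly-defined alternative:
const neutral = '"<int: 123456789123456789>"';
```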
I'm pretty pathetic with RegEx, but it'd be a nice-to-have if the expressions were straightforward to use with JSON Schema. Some data structures become very cumbersome to manually convert to standard JSON for transport (e.g. doubling up on interfaces - some of those I work with have LOTS of Uint8Array and int fields), so being able to encode/decode these fields with a one-liner for transport is probably the biggest selling point of these functions for me.
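For example, if the tag grammar stays as simple as the examples above, the corresponding JSON Schema string patterns could be as small as these (a sketch; the final tag names are still under discussion in this thread):

```ts
// Sketch: JSON Schema fragments for the tagged string formats discussed
// above (assumes the `<Uint8Array: hex>` / `<bigint: digits>` notation).
const uint8ArraySchema = {
  type: 'string',
  pattern: '^<Uint8Array: (?:[0-9a-f]{2})*>$',
};

const bigintSchema = {
  type: 'string',
  pattern: '^<bigint: -?[0-9]+>$',
};
```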
Force-pushed from e72ba49 to 60aec23
This is a counterpart to Libauth's `stringify` function. It currently supports parsing back `Uint8Array` and `BigInt`; not sure how to go about `Symbol` and `Function`.