sourcecred

0.11.2

An initiative (src) DEPENDS ON (verb) a dependency (dst). Forward: depending on something shows the value of the dependency. Backward: having a dependency does not endorse the initiative, but does flow some cred to incentivize reuse and attribution.

dependsOnEdgeType

Type: EdgeType

An initiative (src) REFERENCES (verb) a reference (dst). Forward: referencing from an initiative shows the value of the reference. But we assume a reference likely needs some refinement to be used by the initiative, so it flows less cred than to a dependency. Backward: having reference material does not endorse the initiative, but does flow some cred to incentivize using existing research and attribution.

referencesEdgeType

Type: EdgeType

A contribution (src) CONTRIBUTES TO (verb) an initiative (dst). Forward: a contribution towards the initiative is also an endorsement of the value of that initiative. Backward: an initiative in large part consists of its contributions, so the value of an initiative carries over to its contributions.

contributesToEdgeType

Type: EdgeType

A contributor (src) CONTRIBUTES TO (verb) an entry node (dst). Forward: a contributor's edge to the entry node carries a small endorsement of that contribution, though a high weight would risk contributors' own cred getting "lost to alpha". Backward: flows the value of the contribution to the contributors.

contributesToEntryEdgeType

Type: EdgeType

A user (src) CHAMPIONS (verb) an initiative (dst). Meaning the forward direction is the user claiming and committing that they will champion an initiative, and the backward direction is the return of cred based on the completion and successful championing of the initiative.

Forward: a user championing an initiative is also an endorsement of the value of that initiative. Backward: an initiative likely received a lot of ongoing support from its champion. We're assuming this is more support than individual contributions.

championsEdgeType

Type: EdgeType

Return the address corresponding to a GitHub login.

If the login is considered a bot, then a bot address is returned. Otherwise, a regular user address is returned. The method does not attempt to find out whether the address should actually be an organization address, as we don't yet handle organization addresses.

Note: The signature will need to be refactored when we make the list of bots a configuration option rather than a hardcoded constant.

loginAddress(username: string): RawAddress
Parameters
username (string)
Returns
RawAddress
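
A hypothetical call (a sketch; "octocat" stands in for any GitHub login):

const address = loginAddress("octocat");
// Returns a bot address if "octocat" is in the hardcoded bot list,
// and a regular user address otherwise.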

parseAddress will accept any 20-byte hexadecimal ethereum address encoded as a string, optionally prefixed with 0x.

Per EIP-55 (https://eips.ethereum.org/EIPS/eip-55), parseAddress throws if the provided string is mixed-case but not checksum-encoded. Valid addresses given entirely in lower-case or upper-case will never cause a throw.

For consistency, all valid addresses are converted and returned in mixed-case form with the 0x prefix included.

Valid formats:

  • "2Ccc7cD913677553766873483ed9eEDdB77A0Bb0"
  • "0x2Ccc7cD913677553766873483ed9eEDdB77A0Bb0"
  • "0X2CCC7CD913677553766873483ED9EEDDB77A0BB0"
  • "0x2ccc7cd913677553766873483ed9eeddb77a0bb0"

Invalid formats:

  • "0x2ccc7cD913677553766873483ed9eEDdB77A0Bb0"
  • "2ccc7cD913677553766873483ed9eEDdB77A0Bb0"

parseAddress(s: string): EthAddress
Parameters
s (string)
Returns
EthAddress
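
For example (per the valid/invalid formats above):

parseAddress("0x2ccc7cd913677553766873483ed9eeddb77a0bb0");
// => "0x2Ccc7cD913677553766873483ed9eEDdB77A0Bb0" (checksum-encoded output)
parseAddress("0x2ccc7cD913677553766873483ed9eEDdB77A0Bb0");
// throws: mixed-case input that fails the EIP-55 checksum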

This module defines NodeTypes and EdgeTypes, both of which are data structures containing shared metadata that describes many nodes or edges in the graph. Nodes can be "members" of zero or more NodeTypes, and edges can be "members" of zero or more EdgeTypes. Membership is determined based on the type's prefix, which is an address. A node or edge is considered a member of a type if that type's prefix is a prefix of the node's address.

To make this more concrete, let's consider a specific example. Suppose we define the following nodes:

const pullNode = NodeAddress.fromParts(["github", "pull", "repo", "34"]);
const commitNode = NodeAddress.fromParts(["git", "commit", "e1337"]);
const pullType: NodeType = {
  name: "Pull request",
  prefix: NodeAddress.fromParts(["github", "pull"]),
  // ... more properties as required
};
const githubType: NodeType = {
  name: "GitHub node",
  prefix: NodeAddress.fromParts(["github"]),
};

Then the pullNode is considered a member of the pullType and githubType, while the commitNode is not a member of either type.
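
These membership checks can be expressed with prefix matching (hasPrefix is documented below):

NodeAddress.hasPrefix(pullNode, pullType.prefix);  // => true
NodeAddress.hasPrefix(commitNode, pullType.prefix);  // => false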

The concept of a "type" is useful to SourceCred because it lets us express that groups of nodes are conceptually related, and that we should treat them similarly. Most saliently, we use types to assign default weights to groups of nodes, and to aggregate them for better UI organization. The fact that the SourceCred UI can group all pull requests together, and assign a default weight to all of them, is possible because the GitHub plugin defines a pull request node type.

While a node or edge can theoretically be a member of multiple types, in practice we generally treat the node or edge as though it is only a member of its most specific type. In the example above, we would treat any individual pull request as though it is only a member of the pull request node type. That may change in the future. In general, the type system is not wholly finalized; when it does become finalized, we will likely move it into src/core. See #710 for context.

Represents a "Type" of node in the graph. See the module docstring for context.

NodeType
Properties
name (string)
pluralName (string)
prefix (NodeAddressT)
defaultWeight (number)
description (string)

Represents a "Type" of edge in the graph. See the module docstring for context.

EdgeType
Properties
forwardName (string)
backwardName (string)
defaultWeight (EdgeWeight)
prefix (EdgeAddressT)
description (string)

An IdentityProposal allows a plugin to report a participant identity, for inclusion in the ledger.

The proposal has an alias, which includes a node address for the identity. If some account already has that address, then the proposal may be ignored.

If no account has that address, then the proposal will be added as a new identity in the ledger.

The proposal has a proposed name for the identity, and a name for the plugin. The plugin name will be used as a discriminator if there's already a different identity with that name.

If the name and discriminator combo is taken, then a further numeric discriminator will be added.

When the identity is created, it will have its own identity address, per usual, and then the alias will be added. We give the plugin control over the full alias because aliases include helpful descriptions which are shown in the UI, and the plugin should choose an appropriate description.

IdentityProposal
Properties
name (Name)
pluginName (Name)
alias (Alias)
type (IdentityType)
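
For illustration, here's a minimal sketch of a proposal (the plugin name, identity name, and address parts are all hypothetical):

const proposal: IdentityProposal = {
  name: nameFromString("alice"),
  pluginName: nameFromString("my-plugin"),
  alias: {
    description: "alice's account on my-plugin",  // shown in the UI
    address: NodeAddress.fromParts(["my-org", "my-plugin", "user", "alice"]),
  },
  type: "USER",
};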

Given a Ledger and an IdentityProposal, ensure that some Ledger account exists for the proposed identity and return the identity ID.

If there is already an account matching the node address of the proposal's alias, then the ledger is unchanged.

Otherwise, a new account will be created per the semantics of the IdentityProposal type.

ensureIdentityExists(ledger: Ledger, proposal: IdentityProposal): IdentityId
Parameters
ledger (Ledger)
proposal (IdentityProposal)
Returns
IdentityId
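
For example (assuming a Ledger instance and the proposal sketched above):

const id = ensureIdentityExists(ledger, proposal);
// If an account already matches the alias's node address, the ledger is
// left unchanged and the existing identity's ID is returned.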

A dynamic plugin that allows 3rd parties to rapidly pipe data into an instance.

The External plugin can be used multiple times, because it simply uses the PluginId pattern "external/X" where X can be any name (but preferably an agreed upon name between the 3rd-party software and the instance maintainer).

The External plugin loads its graph and optionally its declaration and identityProposals from either:

  1. The plugin config folder on disk. To use this method, simply place the files into the config/plugins/external/X folder.
  2. A base URL that statically serves the files. To use this method, serve the files statically with cross-origin enabled in the same directory, and add a config.json file in the instance's config/plugins/external/X folder with the form: { "Url": "https://www.myhost.com/path/to/directory" }

Supported files for either method are:

  1. graph.json/graph.json.gzip (required) - works whether or not it is compressed using our library
  2. declaration.json (optional) - if omitted, a default declaration with minimal node/edge types is used; graphs also don't have to adhere to the declaration if they are not meant to be configured using our Weight Configuration UI
  3. identityProposals.json (optional) - if omitted, no identities are proposed

new ExternalPlugin(options: {pluginId: PluginId, storage: DataStorage?, config: ExternalPluginConfig?})
Parameters
options ({pluginId: PluginId, storage: DataStorage?, config: ExternalPluginConfig?})

A way for 3rd-party developers to easily test their External Plugin. After generating a WeightedGraph, a Declaration, and IdentityProposals, a developer could instantiate a ConstructorPlugin and pass it into our graph API using our library in environments such as Observable. This is a prerequisite for testing using credrank because of the IdentityProposals. Once satisfied with the result, they can serve their files for consumption by an ExternalPlugin configuration.

new ConstructorPlugin(options: {weightedGraph: WeightedGraph?, identityProposals: $ReadOnlyArray<IdentityProposal>?, declaration: PluginDeclaration?, pluginId: PluginId?})
Parameters
options ({weightedGraph: WeightedGraph?, identityProposals: $ReadOnlyArray<IdentityProposal>?, declaration: PluginDeclaration?, pluginId: PluginId?})
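
A hypothetical instantiation (the variable names are illustrative; the pluginId follows the "external/X" pattern described above):

const plugin = new ConstructorPlugin({
  weightedGraph: myWeightedGraph,
  identityProposals: myIdentityProposals,
  declaration: myDeclaration,
  pluginId: "external/my-experiment",
});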

The ZipStorage class composes with other WritableDataStorage implementations. It compresses values before passing them into the underlying baseStorage implementation, and decompresses them upon receipt from baseStorage.

new ZipStorage(baseStorage: DataStorage)
Parameters
baseStorage (DataStorage)
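
For example, ZipStorage can wrap a NetworkStorage (described below) so that compressed values fetched over HTTP are transparently decompressed (a sketch; the URL is hypothetical):

const baseStorage = new NetworkStorage("https://example.com/my-instance");
const zipStorage = new ZipStorage(baseStorage);
// Values read through zipStorage are decompressed after retrieval
// from baseStorage.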

This class serves as a simple wrapper for HTTP GET requests using fetch. If an empty string is passed as the base, the base will be interpreted as '.'

new NetworkStorage(base: string)
Parameters
base (string)
Instance Members
get(resource)

A Name is an identity name which has the following properties:

  • It consists of alphanumeric ASCII and of dashes, which makes it suitable for including in URLs (so we can give each contributor a hardcoded URL showing their contributions, Cred, and Grain).
  • It is unique within an instance. Also, no two identities may have names that both have the same lowercase representation.
  • It's chosen by (and changeable by) the owner of the identity.
Name

Parse a Name from a string.

Throws an error if the Name is invalid.

nameFromString(name: string): Name
Parameters
name (string)
Returns
Name

Attempt to coerce a string into a valid name by replacing invalid characters like _ or # with hyphens.

This can still throw: if given a very long string or the empty string, it will fail rather than try to change the name's length.

coerce(name: string): Name
Parameters
name (string)
Returns
Name

Assert at runtime that the provided address is actually a valid address of this kind, throwing an error if it is not. If what is provided, it will be included in the error message.

assertValid(address: Address, what: string?): void
Parameters
address (Address)
what (string?)
Returns
void

Assert at runtime that the provided array is a valid array of address parts (i.e., a valid input to fromParts), throwing an error if it is not. If what is provided, it will be included in the error message.

assertValidParts(parts: $ReadOnlyArray<string>, what: string?): void
Parameters
parts ($ReadOnlyArray<string>)
what (string?)
Returns
void

The empty address (the identity for append). Equivalent to fromParts([]).

empty

Type: Address

Convert an array of address parts to an address. The input must be a non-null array of non-null strings, none of which contains the NUL character. This is the inverse of toParts.

fromParts(parts: $ReadOnlyArray<string>): Address
Parameters
parts ($ReadOnlyArray<string>)
Returns
Address

Convert an address to the array of parts that it represents. This is the inverse of fromParts.

toParts(address: Address): Array<string>
Parameters
address (Address)
Returns
Array<string>
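
Since fromParts and toParts are inverses, they round-trip (using NodeAddress for concreteness):

const address = NodeAddress.fromParts(["sourcecred", "git", "commit"]);
NodeAddress.toParts(address);  // => ["sourcecred", "git", "commit"]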

Pretty-print an address. The result will be human-readable and contain only printable characters. Clients should not make any assumptions about the format.

toString(address: Address): string
Parameters
address (Address)
Returns
string

Construct an address by extending the given address with the given additional components. This function is equivalent to:

return fromParts([...toParts(address), ...components]);

but may be more efficient.

append(address: Address, components: ...Array<string>): Address
Parameters
address (Address)
components (...Array<string>)
Returns
Address

Test whether the given address has the given prefix. This function is equivalent to:

const prefixParts = toParts(prefix);
const addressParts = toParts(address);
const actualPrefix = addressParts.slice(0, prefixParts.length);
return deepEqual(prefixParts, actualPrefix);

(where deepEqual checks value equality on arrays of strings), but may be more efficient.

Note that this is an array-wise prefix, not a string-wise prefix: e.g., fromParts(["ban"]) is not a prefix of fromParts(["banana"]).

hasPrefix(address: Address, prefix: Address): boolean
Parameters
address (Address)
prefix (Address)
Returns
boolean

Interpret the provided string as an Address.

Addresses are natively stored as strings. This method verifies that the provided "raw" address is actually an Address, so that you can have a type-level assurance that a string is an Address.

This is useful if e.g. you are loading serialized Addresses.

Throws an error if the string is not a valid Address.

fromRaw(raw: string): Address
Parameters
raw (string)
Returns
Address

A parser for Addresses.

Convenience wrapper around fromRaw.

parser

Type: C.Parser<Address>

The name of this kind of address, like NodeAddress.

name

Type: string

A unique nonce for the runtime representation of this address. For compact serialization, this should be short; a single letter suffices.

nonce

Type: string

For the purposes of nice error messages: in response to an address of the wrong kind, we can inform the user what kind of address they passed (e.g., "expected NodeAddress, got EdgeAddress"). This dictionary maps another address module's nonce to the name of that module.

otherNonces

Type: Map<string, string>

Convert a string-keyed map to an object. Useful for conversion to JSON. If a map's keys are not strings, consider invoking mapKeys first.

toObject(map: $ReadOnlyMap<InK, InV>): {}
Parameters
map ($ReadOnlyMap<InK, InV>)
Returns
{}

Convert an object to a map. The resulting map will have key-value pairs corresponding to the enumerable own properties of the object in iteration order, as returned by Object.keys.

fromObject(object: {}): Map<K, V>
Parameters
object ({})
Returns
Map<K, V>
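
For example (a minimal sketch):

const m = new Map([["a", 1], ["b", 2]]);
toObject(m);  // => {a: 1, b: 2}
fromObject({a: 1, b: 2});  // => Map {"a" => 1, "b" => 2}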

Shallow-copy a map, allowing upcasting its type parameters.

The Map type constructor is not covariant in its type parameters, which means that (e.g.) Map<string, Dog> is not a subtype of Map<string, Animal> even if Dog is a subtype of Animal. This is because, given a Map<string, Animal>, one can insert a Cat, which would break invariants of existing references to the variable as a map containing only Dogs.

declare class Animal {};
declare class Dog extends Animal {};
declare class Cat extends Animal {};
declare var dogMap: Map<string, Dog>;
const animalMap: Map<string, Animal> = dogMap;  // must fail
animalMap.set("tabby", new Cat());  // or we could do this...
(dogMap.values(): Iterator<Dog>);  // ...now contains a `Cat`!

This problem only exists when a map with existing references is mutated. Therefore, when we shallow-copy a map, we have the opportunity to upcast its type parameters: copy(dogMap) can be a Map<string, Animal>.

copy(map: $ReadOnlyMap<InK, InV>): Map<K, V>
Parameters
map ($ReadOnlyMap<InK, InV>)
Returns
Map<K, V>

Map across the keys of a map. Note that the key-mapping function is provided both the key and the value for each entry.

The key-mapping function must be injective on the map's key set. If it maps two distinct input keys to the same output key, an error may be thrown.

mapKeys(map: $ReadOnlyMap<InK, InV>, f: function (InK, InV): K): Map<K, V>
Parameters
map ($ReadOnlyMap<InK, InV>)
f (function (InK, InV): K)
Returns
Map<K, V>

Map across the values of a map. Note that the value-mapping function is provided both the key and the value for each entry.

There are no restrictions on the value-mapping function (in particular, it need not be injective).

mapValues(map: $ReadOnlyMap<InK, InV>, g: function (InK, InV): V): Map<K, V>
Parameters
map ($ReadOnlyMap<InK, InV>)
g (function (InK, InV): V)
Returns
Map<K, V>

Map simultaneously across the keys and values of a map.

The key-mapping function must be injective on the map's key set. If it maps two distinct input keys to the same output key, an error may be thrown. There are no such restrictions on the value-mapping function.

mapEntries(map: $ReadOnlyMap<InK, InV>, h: function (InK, InV): [K, V]): Map<K, V>
Parameters
map ($ReadOnlyMap<InK, InV>)
h (function (InK, InV): [K, V])
Returns
Map<K, V>
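
Brief illustrations of the three mapping helpers (sketches; recall that an error may be thrown if a key-mapping function is not injective):

const m = new Map([["a", 1], ["b", 2]]);
mapKeys(m, (k) => k.toUpperCase());  // => Map {"A" => 1, "B" => 2}
mapValues(m, (_, v) => v * 10);  // => Map {"a" => 10, "b" => 20}
mapEntries(m, (k, v) => [k + k, v + 1]);  // => Map {"aa" => 2, "bb" => 3}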

Merge maps without mutating the arguments.

Merges multiple maps, returning a new map which has every key from the source maps, with their corresponding values. None of the inputs are mutated. In the event that multiple maps have the same key, an error will be thrown.

merge(maps: $ReadOnlyArray<$ReadOnlyMap<K, V>>): Map<K, V>
Parameters
maps ($ReadOnlyArray<$ReadOnlyMap<K, V>>)
Returns
Map<K, V>
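
For example:

merge([new Map([["a", 1]]), new Map([["b", 2]])]);  // => Map {"a" => 1, "b" => 2}
merge([new Map([["a", 1]]), new Map([["a", 2]])]);  // throws: conflicting key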

Merge multiple WeightedGraphs together.

This delegates to the semantics of Graph.merge and Weights.merge.

merge(ws: $ReadOnlyArray<WeightedGraph>): WeightedGraph
Parameters
ws ($ReadOnlyArray<WeightedGraph>)
Returns
WeightedGraph

Merge multiple Weights together.

The resultant Weights will have every weight specified by each of the input weights.

When there are overlaps (i.e. the same address is present in two or more of the Weights), then the appropriate resolver will be invoked to resolve the conflict. The resolver takes two weights and combines them to return a new weight.

When no resolvers are explicitly provided, merge defaults to conservative "error on conflict" resolvers.

merge(ws: $ReadOnlyArray<WeightsT>, resolvers: {nodeResolver: NodeOperator, edgeResolver: EdgeOperator}?): WeightsT
Parameters
ws ($ReadOnlyArray<WeightsT>)
resolvers ({nodeResolver: NodeOperator, edgeResolver: EdgeOperator}?)
Returns
WeightsT
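
A sketch of a call with explicit resolvers (the additive resolvers here are illustrative, not the defaults; NodeWeight and EdgeWeight shapes per their definitions below):

const resolvers = {
  nodeResolver: (a, b) => a + b,  // NodeWeights are plain numbers
  edgeResolver: (a, b) => ({
    // EdgeWeights have forwards and backwards components
    forwards: a.forwards + b.forwards,
    backwards: a.backwards + b.backwards,
  }),
};
const merged = merge([weightsA, weightsB], resolvers);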

Given a map whose values are arrays, push an element onto the array corresponding to the given key. If the key is not in the map, it is first inserted with a new empty array as its value.

If the key is already in the map, its value will be mutated, not replaced.

pushValue(map: Map<K, Array<V>>, key: K, value: V): Array<V>
Parameters
map (Map<K, Array<V>>)
key (K)
value (V)
Returns
Array<V>
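
For example:

const m = new Map();
pushValue(m, "k", 1);  // => [1]
pushValue(m, "k", 2);  // => [1, 2]; the existing array is mutated in place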

Given a Map, transform its entries into an Array using a provided transformer function.

mapToArray(map: $ReadOnlyMap<K, V>, fn: function (pair: [K, V], index: number): R): Array<R>
Parameters
map ($ReadOnlyMap<K, V>)
fn (function (pair: [K, V], index: number): R)
Returns
Array<R>

This module contains the Graph, which is one of the most fundamental pieces of SourceCred. SourceCred uses this graph to model all of the contributions that make up a project, and the relationships between those contributions.

If you aren't familiar with computer science graphs, now would be a good time to refresh. See this StackOverflow answer for an introduction, and Wikipedia for a more thorough overview. This Graph is used by SourceCred as a "Contribution Graph", where every node is a contribution or contributor (e.g. a pull request, or a GitHub user identity) and every edge represents a connection between contributions or contributors (e.g. a pull request contains a comment, or a comment is authored by a user).

The Graph serves a simple function: it keeps track of which Nodes exist, and what Edges join those nodes to each other. Nodes and Edges are both identified by Addresses; specifically, NodeAddressTs and EdgeAddressTs.

In both cases, addresses are modeled as arrays of strings. For example, you might want to give an address to your favorite node. You can do so as follows:

const myAddress: NodeAddressT = NodeAddress.fromParts(["my", "favorite"])

Edge Addresses are quite similar, except you use the EdgeAddress module.

We model addresses as arrays of strings so that plugins can apply hierarchical namespacing for the address. In general, for any address, the first piece should be the name of the organization that owns the plugin, and the second piece should be the name of the plugin. Pieces thereafter are namespaced by the plugin's internal logic. For example, SourceCred has a Git plugin, and that plugin produces addresses like ["sourcecred", "git", "commit", "9cba0e9e212a287ce26e8d7c2d273e1025c9f9bf"].

This enables "prefix matching" for finding only certain types of nodes. For example, if we wanted to find every Git commit in the graph, we could use the following code:

const commitPrefix = NodeAddress.fromParts(["sourcecred", "git", "commit"]);
const commitNodes = graph.nodes({prefix: commitPrefix});

The graph represents nodes as the Node data type, which includes an address (NodeAddressT) as well as a few other fields that are needed for calculating and displaying cred. The Graph is intended to be a lightweight data structure, so only data directly needed for cred analysis is included. If there's other data you want to store (e.g. the full text of posts that are tracked in the graph), you can use the node address as a key for a separate database.

Edges are represented by Edge objects. They have src and dst fields. These fields represent the "source" of the edge and the "destination" of the edge respectively, and both fields contain NodeAddressTs. The edge also has its own address, which is an EdgeAddressT.

Graphs are allowed to contain Edges whose src or dst are not present. Such edges are called 'Dangling Edges'. An edge may convert from dangling to non-dangling (if it is added before its src or dst), and it may convert from non-dangling to dangling (if its src or dst are removed).

Supporting dangling edges is important, because it means that we can require metadata be present for a Node (e.g. its creation timestamp), and still allow graph creators that do not know a node's metadata to create references to it. (Of course, they still need to know the node's address).

Here's a toy example of creating a graph:
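
// A minimal sketch (illustrative, not verbatim from the codebase); the node
// and edge fields follow the Node and Edge types documented below.
const graph = new Graph();
const fooNode = {
  address: NodeAddress.fromParts(["example", "foo"]),
  description: "An example node",
  timestampMs: 0,
};
const barNode = {
  address: NodeAddress.fromParts(["example", "bar"]),
  description: "Another example node",
  timestampMs: 0,
};
graph.addNode(fooNode);
graph.addNode(barNode);
graph.addEdge({
  address: EdgeAddress.fromParts(["example", "foo-to-bar"]),
  src: fooNode.address,
  dst: barNode.address,
  timestampMs: 0,
});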

Graph has a number of accessor methods:

  • hasNode to check if a node address is in the Graph
  • node to retrieve a node by its address
  • nodes to iterate over the nodes in the graph
  • hasEdge to check if an edge address is in the Graph
  • isDanglingEdge to check if an edge is dangling
  • edge to retrieve an edge by its address
  • edges to iterate over the edges in the graph
  • neighbors to find all the edges and nodes adjacent to a node (also supports filtering by direction, by node prefix, or edge prefix)

The Graph also has a few other convenience methods, like toJSON/fromJSON for serialization, and Graph.merge for combining multiple graphs.

NodeAddressT

Represents a node in the graph.

Node
Properties
address (NodeAddressT)
description (string)
timestampMs ((TimestampMs | null))

An edge between two nodes.

Edge
Properties
address (EdgeAddressT)
src (NodeAddressT)
dst (NodeAddressT)
timestampMs (TimestampMs)

Specifies how to contract a graph, collapsing several old nodes into a single new node, and re-writing edges for consistency.

NodeContraction
Properties
old ($ReadOnlyArray<NodeAddressT>)
replacement (Node)

This module adds a system for specifying "bonus minting" policies. The core idea for bonus minting is that extra Cred is conjured out of thin air (as a "bonus") and distributed to a chosen recipient. This system is intended to be used for minting Cred for project-level dependencies. For example, we would like users of SourceCred to mint some extra Cred and flow it to the SourceCred project.

In CredRank, we handle this by creating extra nodes in the graph which mint the bonus Cred, and it flows directly from those nodes to the intended recipients.

The total amount of Cred that may be minted is unbounded; for example, if the dependencies have a total weight of 0.2, then the total Cred will be 120% of the base Cred, but if the dependencies had a total weight of 1, then the total Cred would be double the base Cred. This was a deliberate design decision so that dependency minting would feel "non-rival", i.e. there is not a fixed budget of dependency cred that must be split between the dependencies. In some cases, it may be reasonable for the total Cred flowing to a project's dependencies to be larger than the total Cred flowing directly to the project's contributors; consider that the total amount of time/effort invested in building all the dependencies may be orders of magnitude larger than investment in the project itself.

Graph
Static Members
fromJSON(compatJson)
merge(graphs)
Instance Members
_reference(n)
_unreference(n)
modificationCount()
addNode(node)
removeNode(a)
hasNode(a)
node(address)
nodes(options?)
addEdge(edge)
removeEdge(address)
hasEdge(address)
isDanglingEdge(address)
edge(address)
edges(options)
neighbors(node, options)
equals(that)
copy()
toJSON()
contractNodes(contractions)

Convert a node into a human readable string.

The precise behavior is an implementation detail and subject to change.

nodeToString(node: Node): string
Parameters
node (Node)
Returns
string

Convert an edge into a human readable string.

The precise behavior is an implementation detail and subject to change.

edgeToString(edge: Edge): string
Parameters
edge (Edge)
Returns
string

Convert an edge to an object whose fields are human-readable. This is useful for storing edges in human-readable formats that should not include NUL characters, such as Jest snapshots.

edgeToStrings(edge: Edge): {address: string, src: string, dst: string, timestampMs: TimestampMs}
Parameters
edge (Edge)
Returns
{address: string, src: string, dst: string, timestampMs: TimestampMs}

Load an object from compatibilized state created by toCompat. The object has an expected type and version, and may optionally have handler functions for transforming previous versions into a canonical state. If a handler is present for the current version, it will be applied. Throws an error if the compatibilized object is the wrong type, or if its version is not current and there was no handler for its version.

fromCompat(expectedCompatInfo: CompatInfo, obj: Compatible<any>, handlers: {}?): T
Parameters
expectedCompatInfo (CompatInfo)
obj (Compatible<any>)
handlers ({}?)
Returns
T

Utilities for working with nullable types: ?T = T | null | void.

These functions use the native runtime representation, as opposed to creating an Optional<T> wrapper class. This ensures that they have minimal runtime cost (just a function call), and that they are trivially interoperable with other code.

When a value of type ?T is null or undefined, we say that it is absent. Otherwise, it is present.

Some functions that typically appear in such libraries are not needed:

  • join (??T => ?T) can be implemented as the identity function, because the Flow types ??T and ?T are equivalent;
  • flatMap (?T => (T => ?U) => ?U) can be implemented simply as map, again because ??T and ?T are equivalent;
  • first (?T => ?T => ?T) can be implemented simply as orElse, again because ??T and ?T are equivalent;
  • isPresent (?T => boolean) doesn't provide much value over the equivalent abstract disequality check;
  • constructors like empty (() => ?T) and of (T => ?T) are entirely spurious.

Other functions could reasonably be implemented, but have been left out because they have rarely been needed:

  • filter (?T => (T => boolean) => ?T);
  • forEach (?T => (T => void) => void);
  • orElseGet (?T => (() => T) => T), which is useful in the case where constructing the default value is expensive.

(Of these three, orElseGet would probably be the most useful for our existing codebase.)

Apply the given function inside the nullable. If the input is absent, then it will be returned unchanged. Otherwise, the given function will be applied.

map(x: T?, f: function (T): U): U?
Parameters
x (T?)
f (function (T): U)
Returns
U?

Extract the value from a nullable. If the input is present, it will be returned. Otherwise, an error will be thrown with the provided message (defaulting to the string representation of the absent input).

get(x: T?, errorMessage: string?): T
Parameters
x (T?)
errorMessage (string?)
Returns
T

Extract the value from a nullable. If the input is present, it will be returned. Otherwise, an error will be thrown, with message given by the provided function.

orThrow(x: T?, getErrorMessage: function (): string): T
Parameters
x (T?)
getErrorMessage (function (): string)
Returns
T

Extract the value from a nullable, using the provided default value in case the input is absent.

orElse(x: T?, defaultValue: T): T
Parameters
x (T?)
defaultValue (T)
Returns
T

Filter nulls and undefined out of an array, returning a new array.

The functionality is easy to implement without a util method (just call filter); however Flow doesn't infer the type of the output array based on the callback that was passed to filter. This method basically wraps filter in a type-aware way.

filterList(xs: $ReadOnlyArray<T?>): Array<T>
Parameters
xs ($ReadOnlyArray<T?>)
Returns
Array<T>
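
For example:

filterList(["a", null, "b", undefined]);  // => ["a", "b"]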

A WeightedGraph bundles a Graph alongside its associated Weights.

Any combination of Weights and Graph can make a valid WeightedGraph. If the Weights contains weights for node or edge addresses that are not present in the graph, then those weights will be ignored. If the graph contains nodes or edges which do not correspond to any weights, then default weights will be inferred.

WeightedGraph
Properties
graph (Graph)
weights (WeightsT)

Create a new, empty WeightedGraph.

empty(): WeightedGraph
Returns
WeightedGraph

Creates new, empty weights.

empty(): WeightsT
Returns
WeightsT

Create a new WeightedGraph where default weights have been overridden.

This takes a base WeightedGraph along with a set of "override" weights. The new graph has the union of both the base and override weights; wherever there is a conflict, the override weights will replace the base weights. This is useful in situations where we want to let the user manually specify some weights, and ensure that the user's decisions will trump any defaults.

This method does not mutate any of the original arguments. For performance reasons, it is not a full copy; the input and output WeightedGraphs have the exact same underlying Graph, which should not be modified.

overrideWeights(wg: WeightedGraph, overrides: WeightsT): WeightedGraph
Parameters
wg (WeightedGraph)
overrides (WeightsT)
Returns
WeightedGraph

Represents the weight for a particular Node (or node address prefix). Weight 1 is the default value and signifies normal importance. Weights are linear, so 2 is twice as important as 1.

NodeWeight

Type: number

Represents the forwards and backwards weights for a particular Edge (or edge address prefix). Weight 1 is the default value and signifies normal importance. Weights are linear, so 2 is twice as important as 1.

EdgeWeight
Properties
forwards (number)
backwards (number)

Represents the weights for nodes and edges.

The weights are stored by address prefix, i.e. multiple weights may apply to a given node or edge.

WeightsT
Properties
nodeWeights (Map<NodeAddressT, NodeWeight>)
edgeWeights (Map<EdgeAddressT, EdgeWeight>)

Return an equivalent form of the given chain whose nodeOrder is the provided array, which must be a permutation of the node order of the original chain.

permute(old: OrderedSparseMarkovChain, newOrder: $ReadOnlyArray<NodeAddressT>): OrderedSparseMarkovChain
Parameters
old (OrderedSparseMarkovChain)
newOrder ($ReadOnlyArray<NodeAddressT>)
Returns
OrderedSparseMarkovChain

Return an equivalent form of the given chain such that for each node, the entries in chain[node].neighbors are sorted.

normalizeNeighbors(old: OrderedSparseMarkovChain): OrderedSparseMarkovChain
Parameters
old (OrderedSparseMarkovChain)
Returns
OrderedSparseMarkovChain

The data inputs to running PageRank.

We keep these separate from the PagerankOptions below, because we expect that within a given context, every call to findStationaryDistribution (or other Pagerank functions) will have different PagerankParams, but often have the same PagerankOptions.

PagerankParams
Properties
pi0 (Distribution)
seed (Distribution)
alpha (number)

PagerankOptions allows the user to tweak PageRank's behavior, especially around convergence.

PagerankOptions
Properties
verbose (boolean)
convergenceThreshold (number)
maxIterations (number)
yieldAfterMs (number)

A representation of a sparse transition matrix that is convenient for computations on Markov chains.

A Markov chain has nodes indexed from 0 to n - 1, where n is the length of the chain. The elements of the chain represent the incoming edges to each node. Specifically, for each node v, the in-degree of v equals the length of both chain[v].neighbor and chain[v].weight. For each i from 0 to the degree of v (exclusive), there is an edge to v from chain[v].neighbor[i] with weight chain[v].weight[i].

In other words, chain[v] is a sparse-vector representation of column v of the transition matrix of the Markov chain.

SparseMarkovChain

Type: $ReadOnlyArray<{neighbor: Uint32Array, weight: Float64Array}>
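
A sketch of a two-node chain (hypothetical values): node 0 transitions to node 1 with probability 1, while node 1 moves to node 0 or stays in place with probability 0.5 each. Recall that chain[v] lists the incoming edges to v:

const chain: SparseMarkovChain = [
  // Incoming to node 0: from node 1, with transition probability 0.5.
  {neighbor: new Uint32Array([1]), weight: new Float64Array([0.5])},
  // Incoming to node 1: from node 0 (probability 1) and from itself (0.5).
  {neighbor: new Uint32Array([0, 1]), weight: new Float64Array([1, 0.5])},
];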

A distribution over the integers 0 through n - 1, where n is the length of the array. The value at index i is the probability of i in the distribution. The values should sum to 1.

Distribution

Type: Float64Array

Compute the maximum difference (in absolute value) between components in two distributions.

Equivalent to $\|\text{pi0} - \text{pi1}\|_\infty$.

computeDelta(pi0: Distribution, pi1: Distribution): number
Parameters
pi0 (Distribution)
pi1 (Distribution)
Returns
number

Data structure representing a particular kind of Markov process, as kind of a middle ground between the semantic SourceCred graph (in the core/graph module) and a literal transition matrix.

Unlike the core graph, edges in a Markov process graph are unidirectional, edge weights are raw transition probabilities (which must sum to 1) rather than unnormalized weights, and there are no dangling edges. Unlike a fully general transition matrix, parallel edges are still reified, not collapsed; nodes have weights, representing sources of flow; and a few SourceCred-specific concepts are made first-class: specifically, cred minting and time period fibration.

The "teleportation vector" from PageRank is also made explicit via the "adjoined seed node" graph transformation strategy, so this data structure can form well-defined Markov processes even from graphs with nodes with no out-weight. Because the graph reifies the teleportation and temporal fibration, the associated parameters are "baked in" to the weights of the Markov process graph.

We use the term "fibration" to refer to a graph transformation where each scoring node is split into one node per epoch, and incident edges are rewritten to point to the appropriate epoch nodes. The term is vaguely inspired by the notion of a fiber bundle, though the analogy is not precise.

The Markov process graphs in this module have four kinds of nodes:

  • base nodes, which are in 1-to-1 correspondence with the nodes in the underlying core graph that are not scoring nodes;
  • user-epoch nodes, which are created for each time period for each scoring node;
  • epoch accumulators, which are created once for each epoch to aggregate over the epoch nodes; and
  • the seed node, which reifies the teleportation vector and forces well-definedness and ergodicity of the Markov process (for nonzero alpha, and assuming that there is at least one edge in the underlying graph).

The edges include:

  • base edges due to edges in the underlying graph, whose endpoints are lifted to the corresponding base nodes or to user-epoch nodes for endpoints that have been fibrated;
  • radiation edges from nodes to the seed node;
  • minting edges from the seed node to cred-minting nodes;
  • webbing edges between temporally adjacent user-epoch nodes; and
  • payout edges from a user-epoch node to the accumulator for its epoch.

A Markov process graph can be converted to a pure Markov chain for spectral analysis via the toMarkovChain method.

deepFreeze

Find an epoch node, or just the original node if it's not a scoring address.

rewriteEpochNode
Parameters
address (NodeAddressT)
edgeTimestampMs (TimestampMs)
Returns
NodeAddressT

Return the node address's canonical index in the node order, if it is present.

nodeIndex(address: NodeAddressT): (number | null)
Parameters
address (NodeAddressT)
Returns
(number | null)

Returns a canonical ordering of the nodes in the graph.

No assumptions should be made about the node order, other than that it is stable for any given MarkovProcessGraph.

nodeOrder(): void
Returns
void

Iterate over the nodes in the graph. If a prefix is provided, only nodes matching that prefix will be returned.

The nodes are always iterated over in the node order.

nodes(options: {prefix: NodeAddressT}?): void
Parameters
options ({prefix: NodeAddressT}?)
Returns
void

Return the edge address's canonical index in the edge order, if it is present.

edgeIndex(address: MarkovEdgeAddressT): (number | null)
Parameters
address (MarkovEdgeAddressT)
Returns
(number | null)

Returns a canonical ordering of the edges in the graph.

No assumptions should be made about the edge order, other than that it is stable for any given MarkovProcessGraph.

edgeOrder(): void
Returns
void

Iterate over the edges in the graph.

The edges are always iterated over in the edge order.

edges(): void
Returns
void

Yield the canonical node order. This has been separated from the class because we need it at construction time, etc.

_nodeOrder(nodes: $ReadOnlyMap<NodeAddressT, MarkovNode>, epochStarts: $ReadOnlyArray<TimestampMs>, participants: $ReadOnlyArray<Participant>): void
Parameters
nodes ($ReadOnlyMap<NodeAddressT, MarkovNode>)
epochStarts ($ReadOnlyArray<TimestampMs>)
participants ($ReadOnlyArray<Participant>)
Returns
void

Return an array containing the node addresses for every virtualized node. The order must be stable.

virtualizedNodeAddresses(epochStarts: $ReadOnlyArray<TimestampMs>, participants: $ReadOnlyArray<Participant>): void
Parameters
epochStarts ($ReadOnlyArray<TimestampMs>)
participants ($ReadOnlyArray<Participant>)
Returns
void

This module allows participants to attribute their cred to other participants. This feature should not be used to make cred sellable/transferable, but instead is intended to allow participants to acknowledge that a portion of their credited outputs are directly generated/supported by the labor of others (e.g. when a contributor has a personal assistant working behind the scenes).

A timestamped configuration representing a decimal proportion of cred flow, which can be applied to a participant pair.

PersonalAttributionProportion
Properties
timestampMs (TimestampMs)
proportionValue (number)

A recipient of cred attribution and a chronological log of proportion configurations.

AttributionRecipient
Properties
toParticipantId (IdentityId)
proportions ($ReadOnlyArray<PersonalAttributionProportion>)

A participant that is attributing their cred, and a log of how they are attributing it.

PersonalAttribution
Properties
fromParticipantId (IdentityId)
recipients ($ReadOnlyArray<AttributionRecipient>)

A list of participants who are attributing their cred, with logs of how they are attributing it.

PersonalAttributions

Type: $ReadOnlyArray<PersonalAttribution>
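
For illustration, a minimal sketch (aliceId and bobId are hypothetical IdentityIds): alice attributes 20% of her cred flow to bob, starting at the given timestamp:

const personalAttributions: PersonalAttributions = [
  {
    fromParticipantId: aliceId,  // hypothetical IdentityId
    recipients: [
      {
        toParticipantId: bobId,  // hypothetical IdentityId
        proportions: [
          // From this timestamp on, flow 20% of alice's cred to bob.
          {timestampMs: 0, proportionValue: 0.2},
        ],
      },
    ],
  },
];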

Validates that:

  1. There is only 1 entry per fromParticipantId.
  2. Each fromParticipantId only has 1 entry per toParticipantId.
  3. Proportions are in chronological order.
  4. Proportions are a number between 0 and 1.
validatePersonalAttributions(personalAttributions: PersonalAttributions)
Parameters
personalAttributions (PersonalAttributions)

This is the intermediary data structure used to index personal attributions data, making lookups faster. It can be interpreted as:

$ReadOnlyMap<fromParticipantId, $ReadOnlyMap<toParticipantId, AttributionRecipient>>

Index

Type: $ReadOnlyMap<IdentityId, $ReadOnlyMap<IdentityId, AttributionRecipient>>

An indexed store of PersonalAttributions that includes optimized queries needed by credrank.

new IndexedPersonalAttributions(personalAttributions: PersonalAttributions, epochStarts: $ReadOnlyArray<TimestampMs>)
Parameters
personalAttributions (PersonalAttributions)
epochStarts ($ReadOnlyArray<TimestampMs>)
Instance Members
_validateIndex(index, epochStarts)
toPersonalAttributions()
recipientsForEpochAndParticipant(epochStart, fromParticipantId)
getProportionValue(epochStart, fromParticipantId, toParticipantId)
getSumProportionValue(epochStart, fromParticipantId)

An analytics-only timestamp. Not built for continued functionality within a MarkovProcessGraph (where epoch nodes are generated and used instead).

timestampMs

Type: (TimestampMs | null)

This module contains logic for creating nodes and edges that act as "gadgets" in CredRank. They are most directly used by markovProcessGraph.js

uuidFromString

A helper function for creating a gadget that only produces edges incident to the seed node. We assume that it has a function for converting from the target type into node address parts, which will be used to produce a unique edge address, and which are the address parts for the src or dst. If seedIsSrc is true, then the seed is the src and the dst will be the target. Otherwise, the seed is the dst and the target will be the src. These markov edges are never reversed.

makeSeedGadget($0: MakeSeedGadgetArgs<T>): EdgeGadget<T>
Parameters
$0 (MakeSeedGadgetArgs<T>)
Name Description
$0.edgePrefix any
$0.seedIsSrc any
$0.toParts any
$0.fromParts any
Returns
EdgeGadget<T>

The payout gadget creates edges that connect participant epoch nodes to the epoch accumulator nodes. Each payout edge represents the flow of Cred from a participant's epoch back to the seed (by means of the accumulator). Thus, the Cred flow on this edge actually represents Cred score for the participant. (The Cred score of the epoch node can't be seen as the user's score, because some of it flows to other contributions, to other epoch nodes, etc.)

payoutGadget

Type: EdgeGadget<ParticipantEpochAddress>

The forward webbing edges flow Cred forwards from participant epoch nodes to the temporally next epoch node from the same participant. The intention is to "smooth out" Cred over time by having some of it flow forwards in time.

forwardWebbingGadget

Type: EdgeGadget<WebbingAddress>

The backward webbing edges flow Cred backwards from participant epoch nodes to the temporally previous epoch node from the same participant. The intention is to "smooth out" Cred over time by having some of it flow backwards in time.

backwardWebbingGadget

Type: EdgeGadget<WebbingAddress>

This module exposes a class that accesses participant data, aggregating between a CredGraph and a Ledger.

It is useful for cases where you want to view a participant's Cred and Grain data simultaneously, for example for creating summary dashboards.

sum

This module outputs aggregated data that combines Cred Scores with Ledger Account data.

We use this internally when creating Grain distributions using a Ledger and a Cred View. It's also an experimental output format which gives overall information on the cred in an instance. We may remove it or make breaking changes to it in the future.

sum

Sum a sequence of Grain values.

sum(xs: Iterable<Grain>): Grain
Parameters
xs (Iterable<Grain>)
Returns
Grain

Cred and Grain data for a given participant.

Implicitly has an associated time scope, which will be the time scope of the CredGrainView or TimeScopedCredGrainView that generated this.

The indices of credPerInterval/grainEarnedPerInterval correspond to the same indices in the IntervalSequence of the CredGrainView or TimeScopedCredGrainView that generated this.

ParticipantCredGrain
Properties
active (boolean)
identity (Identity)
cred (number)
credPerInterval ($ReadOnlyArray<number>)
grainEarned (Grain)
grainEarnedPerInterval ($ReadOnlyArray<Grain>)

Aggregates data across a CredGraph and Ledger.

By default, it includes data across all time present in the instance. Callers can call withTimeScope to get a TimeScopedCredGrainView which returns data that only includes a continuous subset of cred and grain data across time.

new CredGrainView(participants: $ReadOnlyArray<ParticipantCredGrain>, intervals: IntervalSequence)
Parameters
participants ($ReadOnlyArray<ParticipantCredGrain> = [])
intervals (IntervalSequence = intervalSequence([]))
Static Members
fromCredGraphAndLedger(credGraph, ledger)
fromScoredContributionsAndLedger(scoredContributions, ledger, startTimeMs)
fromCredGrainViews(views)
Instance Members
participant(id)

This class's constructor stores a continuous subset of the originalIntervals and participant cred/grain data, where intervals are only included if their start and end times are both within the provided startTimeMs and endTimeMs, inclusive.

new TimeScopedCredGrainView(credGrainView: CredGrainView, startTimeMs: TimestampMs, endTimeMs: TimestampMs)
Parameters
credGrainView (CredGrainView)
startTimeMs (TimestampMs)
endTimeMs (TimestampMs)

Represents a time interval. The interval is half open [startTimeMs, endTimeMs), i.e. if a timestamp is exactly on the interval boundary, it will fall at the start of the newer interval.

Interval
Properties
startTimeMs (TimestampMs)
endTimeMs (TimestampMs)

An interval sequence is an array of intervals with the following guarantees:

  • Every interval has positive time span (i.e. the end time is greater than the start time).
  • Every interval except for the first starts at the same time that the previous interval ended.
  • No interval may have a NaN start or end time. (Infinity is OK.)
IntervalSequence

Represents a slice of a time-partitioned graph. Includes the interval, as well as all of the nodes and edges whose timestamps are within the interval.

GraphInterval
Properties
interval (Interval)
nodes ($ReadOnlyArray<Node>)
edges ($ReadOnlyArray<Edge>)

Partition a graph based on time intervals.

The intervals are always one week long, as calculated using d3.utcWeek. The result may contain empty intervals. If the graph is empty, no intervals are returned. Timeless nodes are not included in the partition, nor are dangling edges.

partitionGraph(graph: Graph): GraphIntervalPartition
Parameters
graph (Graph)
Returns
GraphIntervalPartition

Produce an array of Intervals which cover all the node and edge timestamps for a graph.

The intervals are one week long, and are aligned on clean week boundaries.

This function is basically a wrapper around weekIntervals that makes sure the graph's nodes and edges are all accounted for properly.

graphIntervals(graph: Graph): IntervalSequence
Parameters
graph (Graph)
Returns
IntervalSequence

Produce an array of week-long intervals to cover the startTime and endTime.

Each interval is one week long and aligned on week boundaries, as produced by d3.utcWeek. The weeks always use UTC boundaries to ensure consistent output regardless of which timezone the user is in.

Assuming that the inputs are valid, there will always be at least one interval, so that the input timestamps are covered. (E.g. if startMs and endMs are the same value, then the produced interval will be the start and end of the last week that starts on or before startMs.)

weekIntervals(startMs: number, endMs: number): IntervalSequence
Parameters
startMs (number)
endMs (number)
Returns
IntervalSequence

Sorting utility. Accepts an array and optionally any number of "pluck" functions to get the value to sort by. Will create a shallow copy, and sort in ascending order.

  • arr: The input array to sort
  • pluckArgs: (0...n) Functions to get the value to sort by. Defaults to identity.
sortBy(arr: $ReadOnlyArray<T>, pluckArgs: ...Array<PluckFn<T>>): Array<T>
Parameters
arr ($ReadOnlyArray<T>)
pluckArgs (...Array<PluckFn<T>>)
Returns
Array<T>
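
For example (the records are hypothetical); sorting by age, with name as the tiebreaker:

const users = [
  {name: "carol", age: 40},
  {name: "alice", age: 30},
  {name: "bob", age: 30},
];
sortBy(users, (u) => u.age, (u) => u.name);
// => [{name: "alice", age: 30}, {name: "bob", age: 30}, {name: "carol", age: 40}]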

This module contains the ledger, for accumulating state updates related to identities and Grain distribution.

A key requirement for the ledger is that we need to store an ordered log of every action that's happened in the ledger, so that we can audit the ledger state to ensure its integrity.

identityTypeParser

Timestamped record of a grain payment made to an Identity from a specific Allocation.

AllocationReceipt
Properties
allocationId (AllocationId)
grainReceipt (GrainReceipt)
credTimestampMs (TimestampMs)

The state of the Ledger's accounting configuration.

AccountingStatus
Properties
enabled (boolean)
trackGrainIntegration (boolean)
currency (Currency?)

Every Identity in the ledger has an Account.

MutableAccount
Properties
identity (Identity)
balance (G.Grain)
paid (G.Grain)
allocationHistory (Array<AllocationReceipt>)
active (boolean)
payoutAddresses (PayableAddressStore)
mergedIdentityIds (Array<IdentityId>)
Static Members
balance

PayableAddressStore maps currencies to a participant's address capable of accepting the currency. This structure exists to accommodate safe migration for grain/payout token changes. Users must verify themselves that the address they are supplying is capable of receiving their share of a grain distribution.

PayableAddressStore

Type: Map<CurrencyKey, PayoutAddress>

The Ledger is an append-only auditable data store which tracks

  • Identities and what aliases they possess
  • Identities' grain balances

Every time the ledger state is changed, a corresponding Action is added to the ledger's action log. The ledger state may be serialized by saving the action log, and then reconstructed by replaying the action log. The corresponding methods are actionLog and Ledger.fromActionLog.

None of these methods are idempotent, since they all modify the Ledger state on success by adding a new action to the log. Therefore, they will all fail if they would not cause any change to the ledger's logical state, so as to prevent the ledger from permanently accumulating no-op clutter in the log.

It's important that any API method that fails (e.g. trying to add a conflicting identity) fails without mutating the ledger state; this way we avoid ever getting the ledger in a corrupted state. To make this easier to test, the test code uses deep equality testing on the ledger before/after attempting illegal actions. To ensure that this testing works, we should avoid adding any ledger state that can't be verified by deep equality checking (e.g. don't store state in functions or closures that aren't attached to the Ledger object).

Every Ledger action has a timestamp, and the Ledger's actions must always be in timestamp-sorted order. Adding a new Action with a timestamp older than a previous action is illegal.

new Ledger()
Static Members
fromEventLog(log)
parse(eventLog)
Instance Members
accounts()
account(id)
nameAvailable(name)
accountByAddress(address)
accountByName(name)
allocation(id)
allocations()
distribution(id)
distributions()
distributionByAllocationId(allocationId)
createIdentity(type, name)
mergeIdentities(opts)
renameIdentity(identityId, newName)
addAlias(identityId, alias)
activate(id)
deactivate(id)
distributeGrain(distribution)
transferGrain(opts)
setPayoutAddress(id, payoutAddress, chainId, tokenAddress?)
enableAccounting()
disableAccounting()
enableIntegrationTracking()
disableIntegrationTracking()
markDistributionExecuted(id)
setExternalCurrency(chainId, tokenAddress?)
removeExternalCurrency()
accounting()
externalCurrency()
trackedDistributions()
isGrainIntegrationExecuted(id)
eventLog()
serialize()
lastDistributionTimestamp()
_deactivateAccountsWithoutPayoutAddress()
_eraseLedgerBalances()

The Actions are used to store the history of Ledger changes.

Action

Type: (CreateIdentity | RenameIdentity | AddAlias | MergeIdentities | ToggleActivation | DistributeGrain | TransferGrain | ChangeIdentityType | SetPayoutAddress | EnableGrainIntegration | DisableGrainIntegration | MarkDistributionExecuted | EnableAccounting | DisableAccounting | SetExternalCurrency | RemoveExternalCurrency)

EvmChainId is represented in the form of a stringified integer for all EVM-based chains, including mainnet (1) and xDai (100). The reason for this is that Ethereum's client configuration utilizes a number to represent chainId, and this way we can just transpose that chainId here as a component of the currency Id, since the web3 client will return a stringified integer when the chainId is requested.

EvmChainId

tokenAddress is a subset of all available EthAddresses.

A token address is the address of the token contract for an ERC20 token, or the 20-byte equivalent of 0x0, which is the conventional address used to represent ETH on the Ethereum mainnet, or the native currency on an EVM-based sidechain. See here for more details on these semantics: https://ethereum.org/en/developers/docs/intro-to-ethereum/#eth

tokenAddress

Type: EthAddress

Example protocol symbols: "BTC" for bitcoin and "FIL" for Filecoin

ProtocolSymbol

Chains like Bitcoin and Filecoin do not have "production" sidechains, so we represent them as a string, as specified in the ProtocolSymbol type.

Protocol
Properties
type ("PROTOCOL")
chainId (ProtocolSymbol)

The Currency key must be stringified to ensure the data is retrievable. Keying on the raw Currency object means keying on the object reference, rather than the contents of the object.

CurrencyKey

Generate a uniformly random clean ID.

random(): Uuid
Returns
Uuid

Parse a serialized UUID. This is the left inverse of the trivial injection from Uuid to string, and throws on invalid input.

fromString(s: string): Uuid
Parameters
s (string)
Returns
Uuid

Parse a serialized UUID. This expects to parse a JSON string value with the same semantics as fromString.

parser

Type: C.Parser<Uuid>

Fill the given buffer with cryptographically secure random bytes. The buffer length must not exceed 65536.

getRandomValues(buf: Uint8Array): Uint8Array
Parameters
buf (Uint8Array)
Returns
Uint8Array

In SourceCred, projects regularly distribute Grain to contributors based on their Cred scores. This is called a "Distribution". This module contains the logic for computing distributions.

G

JsonLog tracks and serializes append-only logs of JSON values.

At its heart, it's basically a simple wrapper around an array, which enforces the rule that items may be appended to it, but never removed.

It also provides serialization logic. We store the log as a newline-delimited stream of JSON values, with a one-to-one correspondence between POSIX lines and elements in the sequence. That is, the serialized form of an element will never contain an embedded newline, and there are no empty lines. JSON streams can be easily inspected and manipulated with tools like jq as well as standard Unix filters, and can be stored and transmitted efficiently in Git repositories thanks to packfiles and delta compression.

Elements of a JsonLog are always parsed using a Combo.Parser, which ensures type safety at runtime.

new JsonLog()

Iteratively compute and distribute Grain, based on the provided CredGraph, into the provided Ledger, according to the specified DistributionPolicy.

Here are some examples of how it works:

  • The last time there was a distribution was two days ago. Since then, no new Cred Intervals have been completed. This method will no-op.

  • The last time there was a distribution was last week. Since then, one new Cred Interval has been completed. The method will apply one Distribution.

  • The last time there was a distribution was a month ago. Since then, four Cred Intervals have been completed. The method will apply four Distributions, unless maxOldDistributions is set to a lower number (e.g. 2), in which case that maximum number of distributions will be applied.

It returns the list of applied distributions, which may be helpful for diagnostics, printing a summary, etc.

applyDistributions(config: GrainConfig, credGrainView: CredGrainView, ledger: Ledger, currentTimestamp: TimestampMs, allowMultipleDistributionsPerInterval: boolean): $ReadOnlyArray<Distribution>
Parameters
config (GrainConfig)
credGrainView (CredGrainView)
ledger (Ledger)
currentTimestamp (TimestampMs)
allowMultipleDistributionsPerInterval (boolean)
Returns
$ReadOnlyArray<Distribution>

Compute a single Distribution using CredAccountData.

The distribution will include the provided policies. It will be computed using only Cred intervals that are finished as of the effectiveTimestamp.

Note: This method is untested as it is just a bit of plumbing; Flow gives me confidence that the semantics are correct.

computeDistribution(policies: $ReadOnlyArray<AllocationPolicy>, credGrainView: CredGrainView, effectiveTimestamp: TimestampMs): Distribution
Parameters
policies ($ReadOnlyArray<AllocationPolicy>)
credGrainView (CredGrainView)
effectiveTimestamp (TimestampMs)
Returns
Distribution

This module contains the types for tracking Grain, which is the native project-specific, cred-linked token created in SourceCred instances. In practice, projects can call these tokens anything they want, but we will refer to the tokens as "Grain" throughout the codebase. The conserved properties of all Grains are that they are minted/distributed based on cred scores, and that they can be used to Boost contributions in a cred graph.

We track Grain using big integer arithmetic, so that we can be precise with Grain values and avoid float imprecision issues. Following the convention of ERC20 tokens, we track Grain at 18 decimals of precision, although we can make this project-specific if there's a future need.

At rest, we represent Grain as strings. This is a convenient decision around serialization boundaries, so that we can just directly stringify objects containing Grain values and it will Just Work. The downside is that we need to convert them to/from string representations any time we need to do Grain arithmetic, which could create perf hot spots. If so, we can factor out the hot loop and do them in a way that has less overhead. You can see context for this decision in #1936 and #1938.

Ideally, we would just use the native BigInt type. However, at time of writing it's not well supported by flow or Safari, so we use the big-integer library. That library delegates out to native BigInt when available, so this should be fine.

Since the big-integer library does have a sensible toString method defined on the integers, we could switch to representing Grain at rest via big-integers rather than as strings. However, this would require re-writing a lot of test code. If perf becomes an issue that would be a principled fix.

Grain
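
A minimal sketch of this representation, using native BigInt for brevity (the codebase itself uses the big-integer library, per the note above):

const ONE = 10n ** 18n; // one full Grain, measured in attoGrain
const grain = ((3n * ONE) / 2n).toString(); // "1500000000000000000", i.e. 1.5 Grain
const sum = (BigInt(grain) + ONE).toString(); // parse, do arithmetic, re-stringify at the boundary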

Formats a grain balance as a human-readable number, dividing the raw grain balance by ONE (one full Grain, i.e. 10^18 attoGrain).

The client controls how many digits of precision are shown; by default, we display zero digits. Grain balances will have commas added as thousands-separators if the balance is greater than 1000g.

The client also specifies a suffix; by default, we use 'g' for grain.

Here are some examples of its behavior, pretending that we use 2 decimals of precision for readability:

format(133700042n) === "1,337,000g"
format(133700042n, 2) === "1,337,000.42g"
format(133700042n, 2, "seeds") === "1,337,000.42seeds"
format(133700042n, 2, "") === "1,337,000.42"

format(grain: Grain, decimals: number, suffix: string): string
Parameters
grain (Grain)
decimals (number = 0)
suffix (string = DEFAULT_SUFFIX)
Returns
string

Formats a grain balance as a human-readable number using the format() method, but trims any unnecessary decimal information.

The intended use is for UI presentation where less visual clutter is desired.

Here are some examples of its behavior:

formatAndTrim(100000000000000) === "0.0001g"
formatAndTrim(150000000000000000000) === "150g"
formatAndTrim(15000000000000000000000) === "15,000g"
formatAndTrim(15000000000000000000000, "seeds") === "15,000seeds"
formatAndTrim(15000000000000000000000, "") === "15,000"

formatAndTrim(grain: Grain, suffix: string): string
Parameters
grain (Grain)
suffix (string = DEFAULT_SUFFIX)
Returns
string

Multiply a grain amount by a floating point number.

Use this method when you need to multiply a grain balance by a floating point number, e.g. a ratio.

Note that this method is imprecise. It is not safe to assume, for example, that multiply(g, 1/3) + multiply(g, 2/3) === g due to loss of precision. However, the errors will be small in absolute terms (i.e. tiny compared to one full grain).

See some messy analysis of the numerical errors here: https://observablehq.com/@decentralion/grain-arithmetic

multiplyFloat(grain: Grain, num: number): Grain
Parameters
grain (Grain)
num (number)
Returns
Grain

Convert an integer number (in floating-point representation) into a precise Grain value.

fromInteger(x: number): Grain
Parameters
x (number)
Returns
Grain

Accept human-readable number strings and convert them to precise grain amounts.

This is most useful for processing form input values before passing them into the ledger, since all form fields return strings.

In this case, a "float string" is a string that returns a number value when passed into parseFloat.

The reason to circumvent any floating point values is to avoid losses in precision. By modifying the string directly in a predictable pattern, we can convert user-generated floating point values to grain at full fidelity, and avoid any fuzzy floating point arithmetic.

The tradeoff here is around versatility. Values with more decimals than the allowable precision will yield an error when passed in.

fromFloatString(x: string, precision: number): Grain
Parameters
x (string)
precision (number = DECIMAL_PRECISION)
Returns
Grain
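
A hedged sketch of the string-manipulation approach described above (not the actual implementation): split on the decimal point and pad the fractional part out to the full precision.

function fromFloatStringSketch(x /*: string */, precision /*: number */ = 18) {
  const [whole, frac = ""] = x.split(".");
  if (frac.length > precision) {
    // more decimals than the allowable precision: error, per the tradeoff above
    throw new Error(`too many decimals: ${x}`);
  }
  return whole + frac.padEnd(precision, "0");
}
fromFloatStringSketch("1.5"); // "1500000000000000000"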

Approximately create a grain balance from a float.

This method tries to convert the floating point amt into a grain balance. For example, grain(1) approximately equals ONE.

Do not assume this will be precise! For example, grain(0.1337) results in 133700000000000016n. This method is intended for test code.

This is a shorthand for multiplyFloat(ONE, amt).

fromApproximateFloat(f: number): Grain
Parameters
f (number)
Returns
Grain

Approximates the division of two grain values

This naive implementation of grain division converts the given values to floats and performs simple floating point division.

Do not assume this will be precise!

toFloatRatio(numerator: Grain, denominator: Grain): number
Parameters
numerator (Grain)
denominator (Grain)
Returns
number

Splits a budget of Grain proportional to floating-point scores.

splitBudget guarantees that the total amount distributed will precisely equal the budget. This is a surprisingly challenging property to ensure, and it explains the complexity of this algorithm. We stress-test the method with extremely uneven share distribution (e.g. a split where some users' scores are 10**100 larger than others).

The algorithm can be arbitrarily unfair at the atto-Grain level; for example, in the case splitBudget(fromString("1"), [1, 100]) it will give all the Grain to the first account, even though it only has 1/100th the score of the second account. However, since Grain is tracked with 18 decimal point precision, these tiny biases mean very little in practice. In testing, when splitting one full Grain (i.e. 10**18 attoGrain), we haven't seen discrepancies over ~100 attoGrain, i.e. on the order of 10^-16 of a full Grain.

splitBudget(budget: Grain, scores: $ReadOnlyArray<number>): $ReadOnlyArray<Grain>
Parameters
budget (Grain)
scores ($ReadOnlyArray<number>)
Returns
$ReadOnlyArray<Grain>
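
A simplified sketch of the conservation property (not the library's actual algorithm): hand out proportional floors, then give away the leftover attoGrain so the parts sum exactly to the budget, mirroring the "giveaway-leftovers" approach noted below.

function splitBudgetSketch(budget /*: bigint */, scores /*: number[] */) {
  const total = scores.reduce((a, b) => a + b, 0);
  // floor of each proportional share, via a fixed-point fraction
  const parts = scores.map(
    (s) => (budget * BigInt(Math.floor((s / total) * 1e9))) / 10n ** 9n
  );
  const leftover = budget - parts.reduce((a, b) => a + b, 0n);
  parts[parts.length - 1] += leftover; // conservation: parts now sum to budget
  return parts;
}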

Uncomment below if you want to measure the discrepancy caused by forcing fraction=1 whenever fraction > 1.

In testing, when distributing one full Grain across wildly unequal scores, it never produced more than ~hundreds of attoGrain discrepancy.

budgetRemaining

Uncomment below if you want to measure the discrepancy caused by this "giveaway-leftovers" approach. In testing, when run with wildly varying shares, it never produced more than ~hundreds of attoGrain discrepancy.

add

Shape of currencyDetails.json on disk

SerializedCurrencyDetails
Properties
currencyName (string?)
currencySuffix (string?)
decimalsToDisplay (number?)
integrationCurrency (IntegrationCurrency?)

Shape of currencyDetails JSON in memory after parsing

CurrencyDetails
Properties
name (string)
suffix (string)
decimals (number)
integrationCurrency (IntegrationCurrency?)

Utilized by combo.fmap to enforce default currency values when parsing. This engenders a "canonical default" since there will be no need to add default fallbacks when handling currency detail values after parsing the serialized file.

Parameters
Returns
CurrencyDetails

An Alias is basically another graph Node which resolves to this identity. We ignore the timestamp because it's generally not significant for users; we keep the address out of obvious necessity, and we keep the description so we can describe this alias in UIs (e.g. the ledger admin panel).

Alias
Properties
description (string)
address (NodeAddressT)

A leaf node in the Expression tree structure. It represents a trait that can be weighted atomically.

WeightOperand
Properties
key (string)
subkey (string?)

A recursive type that forms a tree-like structure of algebraic expressions. Can be evaluated as OPERATOR(...weightOperands, ...expressionOperands).

For example, if the operator is ADD, an expression could be written as: weightOperand1 + weightOperand2 + expressionOperand1 + ...

The recursive nature of this type allows complex composition of expressions: ADD(..., MULTIPLY(..., ADD(...)), MAX(...))

Expression
Properties
operator (OperatorOrKey)
description (string)
weightOperands ($ReadOnlyArray<WeightOperand>)
expressionOperands ($ReadOnlyArray<Expression>)
Static Members
operator
description
weightOperands
expressionOperands

A granular contribution that contains the root node of an Expression tree and also has an outgoing array of participants, creating a DAG-like structure.

Responsible for timestamping, containing granular participation details, and linking Expressions to Participants.

Contribution
Properties
id (string)
plugin (string)
type (string)
timestampMs (TimestampMs)
expression (Expression)
participants ($ReadOnlyArray<{id: NodeAddressT, shares: $ReadOnlyArray<WeightOperand>}>)

If the subkey is found, returns the subkey's weight. If the subkey is not found, returns the key's default. Throws if the key has not been set in the configuration.

getWeight($0: WeightOperand, config: WeightConfig): number
Parameters
Name Description
$0.key any
$0.subkey any
config (WeightConfig)
Returns
number
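
Illustrative lookups against the Discord-style config shown below under WeightConfig (results follow the rules above; the unconfigured "emoji" key is hypothetical):

getWeight({key: "channel", subkey: "12345678"}, config); // 3, the subkey's weight
getWeight({key: "channel", subkey: "99999999"}, config); // 1, the key's default
getWeight({key: "emoji", subkey: null}, config); // throws: key not configured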

Returns true if the subkey exists in the subkeys array of the key. Returns false if the subkey does not exist in the subkeys array. Throws if the key has not been set in the configuration.

hasExplicitWeight($0: WeightOperand, config: WeightConfig): boolean
Parameters
Name Description
$0.key any
$0.subkey any
config (WeightConfig)
Returns
boolean

Semantically, allows weight configuration of different qualities/characteristics of contributions.

Technically, a once-nested key-value store that maps key-subkey pairs to weights and specifies a default weight at the key-level that can be used when a queried subkey is not found.

A Discord-based example might look like:

{
  "key": "channel",
  "default": 1,
  "subkeys": [
    {"subkey": "12345678", "memo": "props", "weight": 3}
  ]
}

WeightConfig

Type: $ReadOnlyArray<{key: string, default: number, subkeys: $ReadOnlyArray<{subkey: string, memo: string?, weight: number}>}>

Static Members
key
default
subkeys

A key-value store of configured operators, allowing the configuration of operators within an expression. For example, one might be able to configure that emoji reactions be added or multiplied.

OperatorConfig

Type: $ReadOnlyArray<{key: string, operator: Operator}>

Static Members
key
operator

Wraps the other config types, and defines a time scope via a start date. The end date will be inferred as the next highest start date in an array of Configs.

RawConfig
Properties
memo (string)
startDate (TimestampISO)
weights (WeightConfig)
operators (OperatorConfig)
shares (WeightConfig)
Static Members
memo

Groups Configs together by target strings that may represent a server ID/endpoint, a repository name, etc.

RawConfigsByTarget

A note or a human-readable description to make it easier to recognize this config.

memo

Type: string

Takes a prefixed key and returns the configured operator queried by the non-prefixed key. Throws if the input is not properly prefixed. Throws if the key has not been set in the configuration. Throws if the configured operator is not a valid operator.

getOperator(rawKey: OperatorOrKey, config: Config): Operator
Parameters
rawKey (OperatorOrKey)
config (Config)
Returns
Operator

Utility function for getting the earliest start time of all configs in an array of ConfigsByTarget.

getEarliestStartForConfigs(configsByTargetArray: $ReadOnlyArray<ConfigsByTarget>): TimestampMs
Parameters
configsByTargetArray ($ReadOnlyArray<ConfigsByTarget>)
Returns
TimestampMs

We have a convention of using TimestampMs as our default representation. However TimestampISO has the benefit of being human readable / writable, so it's used for serialization and display as well. We'll validate types at runtime, as there's a fair chance we'll use these functions to parse data that came from a Flow any type (like JSON).

TimestampMs

Type: number

Creates a TimestampISO from a TimestampMs-like input.

Since many of the existing types use number rather than TimestampMs, accepting number gives an easier upgrade path than forcing a refactor across the codebase.

toISO(timestampLike: (TimestampMs | number)): TimestampISO
Parameters
timestampLike ((TimestampMs | number))
Returns
TimestampISO

Creates a TimestampMs from a TimestampISO.

fromISO(timestampISO: (TimestampISO | string)): TimestampMs
Parameters
timestampISO ((TimestampISO | string))
Returns
TimestampMs
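
A round-trip example, assuming the ISO form is the standard UTC ISO-8601 serialization:

const iso = toISO(1577836800000); // "2020-01-01T00:00:00.000Z"
const ms = fromISO(iso); // 1577836800000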

Validate that a number is potentially a valid timestamp.

This checks that the number is a finite integer, which avoids some potential numbers that are not valid timestamps.

validateTimestampMs(timestampMs: number): TimestampMs
Parameters
timestampMs (number)
Returns
TimestampMs

Generic adaptor for persisting a Ledger to some storage backend (e.g. GitHub, local filesystem, a database, etc)

LedgerStorage

Returns a list of LedgerEvents that have not been persisted to storage yet

_getLocalChanges(remoteLedger: Ledger): LedgerDiff
Parameters
remoteLedger (Ledger)
Returns
LedgerDiff

Returns a list of LedgerEvents in the persisted ledger that have not been applied to the local ledger.

_getRemoteChanges(remoteLedger: Ledger): LedgerDiff
Parameters
remoteLedger (Ledger)
Returns
LedgerDiff

Persists the local (in-memory) Ledger to the ledger storage. Reloads the remote ledger from storage right before persisting it to minimize the possibility of overwriting remote changes that were not synced to the local ledger and ensure consistency of the ledger events (e.g. no double spends).

A race condition is present in this function: if client A runs reloadLedger and then client B writes to the remote ledger before client A finishes writing, then the changes that client B made would be overwritten by the changes from client A. The correctness and consistency of the ledger will still be maintained; it's just that client B might experience data loss of whatever events they were trying to sync. To detect if this has occurred, we reload the ledger again after writing the data to ensure the local changes were not overwritten. If they were, we can show an error message to client B with a list of changes that failed to sync.

persist(storageSetArgs: ...Array<any>): Promise<ReloadResult>
Parameters
storageSetArgs (...Array<any>)
Returns
Promise<ReloadResult>

Reloads the persisted Ledger from storage and replays any local changes on top of any new remote changes, if they exist.

Will return the list of new remote changes as well as a list of local changes that have not been persisted yet. This data is useful for the end user to know:

  • what changes they have yet to save
  • what new remote changes have been applied
  • if there are any inconsistencies as a result of new remote changes that conflict with the local changes (e.g. double spend)
reloadLedger(): Promise<ReloadResult>
Returns
Promise<ReloadResult>

Returns an array of ledger events that exist in ledger "a" but not in "b". An event is considered equal to another if it has the same uuid.

This will not return any events from "b" that don't exist in "a", so the order of the params matters.

Example 1:

  • Ledger A: [1, 2, 3]
  • Ledger B: [1, 3, 4, 5]
  • Returns: [2]

Example 2:

  • Ledger A: [1, 3, 4, 5]
  • Ledger B: [1, 2, 3]
  • Returns: [4, 5]
diffLedger(a: Ledger, b: Ledger): LedgerDiff
Parameters
a (Ledger)
b (Ledger)
Returns
LedgerDiff

The get method loads the content specified by path from the GitHub repository.

Parameters
path (string) relative path to the content.
Returns
Promise<Uint8Array>:

A primary SourceCred API that combines the given inputs into a single WeightedGraph and then runs the CredRank algorithm on it to create a CredGraph containing the cred scores of nodes/participants.

Might mutate the ledger that is passed in.

credrank(input: CredrankInput): Promise<CredrankOutput>
Parameters
input (CredrankInput)
Returns
Promise<CredrankOutput>

Compute CredRank results given a WeightedGraph, a Ledger, and optional parameters.

credrank(weightedGraph: WeightedGraph, ledger: Ledger, personalAttributions: PersonalAttributions, markovProcessGraphParameters: $Shape<MarkovProcessGraphParameters>, pagerankOptions: $Shape<PagerankOptions>?): Promise<CredGraph>
Parameters
weightedGraph (WeightedGraph)
ledger (Ledger)
personalAttributions (PersonalAttributions = [])
markovProcessGraphParameters ($Shape<MarkovProcessGraphParameters> = {})
pagerankOptions ($Shape<PagerankOptions>?)
Returns
Promise<CredGraph>

This module defines configuration for the Dependencies system, a system which allows a project to mint excess Cred for its dependencies.

To learn about the semantics of the dependencies system, read the module docstring for core/bonusMinting.js

At a high level, this config type allows the instance maintainer to specify identities (usually PROJECT-type identities) to mint extra Cred over time, as a fraction of the baseline instance Cred.

In the future, we'll likely build a UI to manage this config. However, right now it's designed for hand-editability. Also, we really want to be able to ship a default config that adds a SourceCred account (if one doesn't already exist), activates it (if it was just created), and then flows it some Cred.

With that in mind, here's how the config works:

  • User makes a new config, specifying a name for the identity. The user does not manually write in an id.
  • The config is validated against the ledger. If the config has an id, we verify that there's a matching identity in the ledger with that name (erroring if not). If the config doesn't have an id, we check if there is an identity in the ledger with that name. If there is, we write the id into the config. If there isn't, we create a new identity with the name, activate it (if told to do so by the config), and then write the id into the config.
  • Afterwards, we save the config (which is guaranteed to have an id) back to disk.

You'll note that the state in the config is a mix of human generated (choosing the name) and automatically maintained (the id). It's something of a weird compromise, but it accomplishes the design objective of having state that's easy for humans to write by hand, but also tracks the vital information by identity id (which is immutable) rather than by name (which is re-nameable).

Note that at present, when the identity in question is re-named, the config must be manually updated to account for the rename. In the future (when the config is automatically maintained) we'll remove this requirement. (Likely we'll stop tracking the identities by name at all in the config; that's an affordance to make the files generatable by hand.)

C

The ProcessedBonusPolicy is a BonusPolicy which has been transformed so that it matches the abstractions available when we're doing raw cred computation: instead of an address, we track an index into the canonical node order, and rather than arbitrary client-provided periods, we compute the weight for each Interval.

TODO(#1686, @decentralion): Remove this once we switch to CredRank.

ProcessedBonusPolicy
Properties
nodeIndex (number)
intervalWeights ($ReadOnlyArray<number>)

Given the weights and types, produce a NodeWeightEvaluator, which assigns a numerical weight to any node.

The weights are interpreted as prefixes, i.e. a given address may match multiple weights. When this is the case, the matching weights are multiplied together. When no weights match, a default weight of 1 is returned.

We currently take the NodeTypes and use them to 'fill in' default type weights if no weight for the type's prefix is explicitly set. This is a legacy affordance; shortly we will remove the NodeTypes and require that the plugins provide the type weights when the Weights object is constructed.

nodeWeightEvaluator(weights: WeightsT): NodeWeightEvaluator
Parameters
weights (WeightsT)
Returns
NodeWeightEvaluator

Given the weights and the types, produce an EdgeWeightEvaluator, which will assign an EdgeWeight to any edge.

The edge weights are interpreted as prefix matchers, so a single edge may match zero or more EdgeWeights. The weight for the edge will be the product of all matching EdgeWeights (with 1 as the default forwards and backwards weight.)

The types are used to 'fill in' extra type weights. This is a temporary state of affairs; we will change plugins to include the type weights directly in the weights object, so that producing weight evaluators will no longer depend on having plugin declarations on hand.

edgeWeightEvaluator(weights: WeightsT): EdgeWeightEvaluator
Parameters
weights (WeightsT)
Returns
EdgeWeightEvaluator

Create an empty trie backed by the given address module.

constructor(m: AddressModule<K>)
Parameters
m (AddressModule<K>)

Add key k to this trie with value v. Return this.

add(k: K, val: V): this
Parameters
k (K)
val (V)
Returns
this

Get the values in this trie along the path to k.

More specifically, this method has the following observable behavior. Let inits be the list of all prefixes of k, ordered by length (shortest to longest). Observe that the length of inits is n + 1, where n is the number of parts of k; inits begins with the empty address and ends with k itself. Initialize the result to an empty array. For each prefix p in inits, if p was added to this trie with value v, then append v to result. Return result.

get(k: K): Array<V>
Parameters
k (K)
Returns
Array<V>
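
An illustrative walk-through of the spec above, with hypothetical addresses:

// trie.add(address([]), "root");
// trie.add(address(["plugin"]), "plugin");
// trie.add(address(["plugin", "repo", "issue"]), "issue");
// trie.get(address(["plugin", "repo", "issue"]));
// → ["root", "plugin", "issue"]: one value per prefix of k that was
//   explicitly added, ordered from shortest prefix to longest.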

Get the last stored value v in the path to key k. Returns undefined if no value is available.

getLast(k: K): V?
Parameters
k (K)
Returns
V?

This module adds logic for imposing a Cred minting budget on a graph.

Basically, we allow specifying a budget where nodes matching a particular address may mint at most a fixed amount of Cred per period. Since every plugin writes nodes with a distinct prefix, this may be used to specify plugin-level Cred budgets. The same mechanism could also be used to implement more finely-grained budgets, e.g. for specific node types.

IntervalLength

Type: "WEEKLY"

Given a WeightedGraph and a budget, return a new WeightedGraph which ensures that the budget constraint is satisfied.

Concretely, this means that the weights in the Graph may be reduced, as necessary, in order to bring the total minted Cred within an interval down to the budget's requirements.

applyBudget(wg: WeightedGraphT, budget: Budget): WeightedGraphT
Parameters
wg (WeightedGraphT)
budget (Budget)
Returns
WeightedGraphT

Given the WeightedGraph and the Budget, returns an array of every {address, weight} pair where the address needs to be re-weighted in order to satisfy the budget constraint.

_computeReweighting(wg: WeightedGraphT, budget: Budget): Reweighting
Parameters
wg (WeightedGraphT)
budget (Budget)
Returns
Reweighting

Given the time-partitioned graph, the weight evaluator, and a particular entry for the budget, return every {address, weight} pair where the corresponding address needs to be reweighted in order to satisfy this budget entry.

_reweightingForEntry(args: {evaluator: NodeWeightEvaluator, partition: GraphIntervalPartition, entry: BudgetEntry}): Reweighting
Parameters
args ({evaluator: NodeWeightEvaluator, partition: GraphIntervalPartition, entry: BudgetEntry})
Returns
Reweighting

Given a WeightedGraph and the reweighting, return a new WeightedGraph which has had its weights updated accordingly, without mutating the original WeightedGraph.

_reweightGraph(wg: WeightedGraphT, reweighting: Reweighting): WeightedGraphT
Parameters
wg (WeightedGraphT)
reweighting (Reweighting)
Returns
WeightedGraphT

Given an array of node addresses, return true if any node address is a prefix of another address.

This method runs in O(n^2). This should be fine because it's intended to be run on small arrays (~one per plugin). If this becomes a performance hotspot, we can write a more performant version.

_anyCommonPrefixes(addresses: $ReadOnlyArray<NodeAddressT>): boolean
Parameters
addresses ($ReadOnlyArray<NodeAddressT>)
Returns
boolean

Given a MarkovProcessGraph, compute PageRank scores on it.

markovProcessGraphPagerank(mpg: MarkovProcessGraph, options: $Shape<PagerankOptions>): Promise<CredGraph>
Parameters
mpg (MarkovProcessGraph)
options ($Shape<PagerankOptions> = {})
Returns
Promise<CredGraph>

This type resembles the JSON schema for configuring personal attributions, which allows participants to attribute their cred to other participants. This feature should not be used to make cred sellable/transferable, but instead is intended to allow participants to acknowledge that a portion of their credited outputs are directly generated/supported by the labor of others. (e.g. when a contributor has a personal assistant working behind the scenes)

PersonalAttributionsConfig

Type: Array<{fromParticipantName: Name, fromParticipantId: IdentityId?, recipients: Array<{toParticipantName: Name, toParticipantId: IdentityId?, proportions: Array<{startDate: TimestampISO, decimalProportion: number}>}>}>

Adds the IdentityIds where only IdentityNames are provided, and updates names and ids to reflect the account's current identity after merging/renaming.

updatePersonalAttributionsConfig(config: PersonalAttributionsConfig, ledger: Ledger): PersonalAttributionsConfig
Parameters
Returns
PersonalAttributionsConfig

Iterates through the provided plugins, runs their contributions and identities processes, and updates the ledger with any new IdentityProposals.

Might mutate the ledger that is passed in.

contributions(input: ContributionsInput, taskReporter: TaskReporter): Promise<ContributionsOutput>
Parameters
input (ContributionsInput)
taskReporter (TaskReporter = new SilentTaskReporter())
Returns
Promise<ContributionsOutput>

This class is a lightweight utility for reporting task progress to the command line.

  • When a task is started, it's printed to the CLI with a " GO " label.
  • When it's finished, it's printed with a "DONE" label, along with the time elapsed.
  • Tasks are tracked and represented by string id.
  • The same task id may be re-used after the first task with that id is finished.

new LoggingTaskReporter(consoleLog: ConsoleLog?, getTime: GetTime?)
Parameters
consoleLog (ConsoleLog?)
getTime (GetTime?)

SilentTaskReporter is a task reporter that collects some information, but does not emit any visible notifications.

It can be used for testing purposes, or as a default TaskReporter for cases where we don't want to default to emitting anything to console.

Rather than emitting any messages or taking timing information, it allows retrieving the sequence of task updates that were sent to the reporter. This makes it easy for test code to verify that the TaskReporter was sent the right sequence of tasks.

Callers can also check what tasks are still active (e.g. to verify that there are no active tasks unfinished at the end of a method.)

new SilentTaskReporter()

ScopedTaskReporter is a higher-order task reporter for generating opaque scopes meant to be passed into child contexts.

In this case, a scope is a log prefix indicating the parent context in which the current task is running.

This allows for reliable filtering and searching on existing tasks by prefix. Care should be taken to ensure that the same prefix does not exist in peer task contexts, so far as error handling is concerned, or a filter may incorrectly catch and finish a still-running task while error-handling. This risk can be mitigated by only designating prefixes via a Scoped Task Reporter, as opposed to passing prefixes into the start and finish methods manually. For example, this block will always throw:

function f(top: SilentTaskReporter) {
  top.start("my-prefix: foo");
  const scoped = new ScopedTaskReporter(top, "my-prefix");
  scoped.start("foo"); // Error: task my-prefix: foo already active
}

new ScopedTaskReporter(delegate: TaskReporter, prefix: string)
Parameters
delegate (TaskReporter)
prefix (string)

A primary SourceCred API that runs the CredEquate algorithm on the given inputs to create ScoredContributions containing info on the cred scores of contributions and the cred earned by participants in each contribution.

credequate(input: CredequateInput): CredequateOutput
Parameters
input (CredequateInput)
Returns
CredequateOutput

The input CredGrainView merged with information from the generated scoredContributions

credGrainView

Type: CredGrainView

Scored contributions for the dependencies: one per week per dependency.

scoredDependencyContributions

Type: $ReadOnlyArray<ScoredContribution>

A SourceCred API that generates ScoredContributions to give bonus cred to organizations and projects that the instance depends on or supports.

May mutate the ledger and the dependencies inputs. Will return a new CredGrainView with dependencies included.

dependencies(input: DependenciesInput): DependenciesOutput
Parameters
input (DependenciesInput)
Returns
DependenciesOutput

Iterates through the provided plugins, runs their graph and identities processes, and updates the ledger with any new IdentityProposals.

Might mutate the ledger that is passed in.

graph(input: GraphInput, scope: $ReadOnlyArray<PluginId>?, taskReporter: TaskReporter): Promise<GraphOutput>
Parameters
input (GraphInput)
scope ($ReadOnlyArray<PluginId>?)
taskReporter (TaskReporter = new SilentTaskReporter())
Returns
Promise<GraphOutput>

A class for composing ReferenceDetectors. Calls ReferenceDetectors in the order they're given in the constructor, returning the first NodeAddressT it encounters.

new CascadingReferenceDetector(refs: $ReadOnlyArray<ReferenceDetector>)
Parameters
refs ($ReadOnlyArray<ReferenceDetector>)

A primary SourceCred API that combines the given inputs into a list of grain distributions.

May mutate the ledger that is passed in.

grain(input: GrainInput): Promise<GrainOutput>
Parameters
input (GrainInput)
Returns
Promise<GrainOutput>

executeGrainIntegrationsFromGrainInput

packages/sourcecred/src/api/main/grain.js

Marshal grainInput from a Grain Configuration file for use with the executeGrainIntegration function.

executeGrainIntegrationsFromGrainInput(grainInput: GrainInput, ledger: Ledger, distributions: $ReadOnlyArray<Distribution>): Promise<GrainIntegrationResults>
Parameters
grainInput (GrainInput)
ledger (Ledger)
distributions ($ReadOnlyArray<Distribution>)
Returns
Promise<GrainIntegrationResults>

This function definition is implemented by Grain Integrations. Grain integrations allow distributions to be executed programmatically beyond the ledger. However, an integration might have some side-effects that require the ledger to be updated, and it therefore has the option of returning a list of ledger operations. The ledger will be updated if accounting is enabled. Otherwise, grain balances will be tracked elsewhere.

GrainIntegrationFunction

Type: function (PayoutDistributions, IntegrationConfig): Promise<PayoutResult>

Return ids sorted by balance.

sortedIds(distributionBalances: DistributionBalances): $ReadOnlyArray<IdentityId>
Parameters
distributionBalances (DistributionBalances)
Returns
$ReadOnlyArray<IdentityId>

Center a string in whitespace to a total length of len.

formatCenter(str: string, len: number): string
Parameters
str (string)
len (number)
Returns
string

Given some distribution, maps each id to the total allocated to that id across all allocation policies.

DistributionBalances

Type: Map<IdentityId, G.Grain>

Given DistributionBalances, return total grain distributed across participants.

getTotalDistributed(distributionBalances: DistributionBalances): G.Grain
Parameters
distributionBalances (DistributionBalances)
Returns
G.Grain

Input type for the analysis API

AnalysisInput
Properties
credGraph (CredGraph)
ledger (Ledger)
featureFlags ({neo4j: boolean?})

Output type for the analysis API

AnalysisOutput
Properties
accounts (CredAccountData)
neo4j (Neo4jOutput?)

A primary SourceCred API that transforms the given inputs into useful data analysis structures.

Parameters
input (AnalysisInput)
Returns
Promise<AnalysisOutput>

Iterators that will yield CSV strings. The CSV contents will be batched in groups for scalability. Each group will include headers. These strings can each be written to disk as a .csv file and then used to export the nodes and edges of a CredGraph into a Neo4j database using neo4j-admin.

Neo4jOutput
Properties
nodes (any)
edges (any)

Returns an array of arrays that contains all of the items in the original array parameter, but batched into arrays no larger than the batchSize. Example: batch([1,2,3,4,5], 2) = [[1,2], [3,4], [5]]

batchArray(array: $ReadOnlyArray<T>, batchSize: number): $ReadOnlyArray<$ReadOnlyArray<T>>
Parameters
array ($ReadOnlyArray<T>)
batchSize (number)
Returns
$ReadOnlyArray<$ReadOnlyArray<T>>

Returns an iterator that will stop upon reaching the batchSize, and then can be reused again for more batches. Use the provided hasNext() method to know when there are no more batches available. Example:

const result = [];
while (iterator.hasNext()) {
  for (const item of iterator) {
    // code to process item
  }
  // code to finalize batch
}

batchIterator(iterator: (Iterator<T> | Generator<T, void, void>), batchSize: number): BatchIterator<T>
Parameters
iterator ((Iterator<T> | Generator<T, void, void>))
batchSize (number)
Returns
BatchIterator<T>

This is an Instance implementation that reads and writes using relative paths on the given base URL. The base URL given should end with a trailing slash.

new ReadInstance(storage: DataStorage)
Parameters
storage (DataStorage)

Simple read interface for inputs and outputs of the main SourceCred API.

ReadOnlyInstance
Instance Members
readGraphInput()
readContributionsInput()
readCredrankInput()
readCredequateInput()
readGrainInput()
readAnalysisInput()
readWeightedGraphForPlugin(pluginId)
readCredGraph()
readCredGrainView()
readLedger()

Simple read/write interface for inputs and outputs of the main SourceCred API.

Instance

Extends ReadOnlyInstance

Instance Members
writeGraphOutput(graphOutput, shouldZip?)
writeContributionsOutput(contributionsOutput, shouldZip?)
writeCredrankOutput(credrankOutput, shouldZip?)
writeCredequateOutput(credequateOutput, shouldZip?)
writeAnalysisOutput(analysisOutput)
writeLedger(ledger)
writeGrainIntegrationOutput(result)
updateGrainIntegrationConfig(result, config)

This module contains logic for setting Cred minting budgets over time on a per-plugin basis. As an example, suppose we want to limit the GitHub plugin to mint only 200 Cred per week, and we want the Discord plugin to mint 100 Cred per week until Jan 1, 2020 and 200 Cred per week thereafter. We could do so with a config like the one sketched below.

RawPluginBudgetConfig
Properties
intervalLength (IntervalLength)
plugins ({})
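
A hedged sketch of such a config: intervalLength and plugins come from the properties above, while the per-plugin entry fields (budget amounts and start dates) are assumed for illustration.

{
  "intervalLength": "WEEKLY",
  "plugins": {
    "sourcecred/github": [
      {"budget": 200, "startTime": "2019-01-01"}
    ],
    "sourcecred/discord": [
      {"budget": 100, "startTime": "2019-01-01"},
      {"budget": 200, "startTime": "2020-01-01"}
    ]
  }
}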

This class serves as a simple wrapper for http GET requests using fetch.

new OriginStorage(base: string)
Parameters
base (string)
Instance Members
get(resource)

Data Storage allows the implementation of a uniform abstraction for I/O

DataStorage
Instance Members
get(key)

Keys should be file-system friendly.

WritableDataStorage

Extends DataStorage

The Balanced policy attempts to pay Grain to everyone so that their lifetime Grain payouts are consistent with their lifetime Cred scores.

We recommend use of the Balanced strategy as it takes new information into account: for example, if a user's contributions earned little Cred in the past, but are now seen as more valuable, the Balanced policy will take this into account and pay them more, to fully appreciate their past contributions.

Balanced

Type: "BALANCED"

Allocate a fixed budget of Grain to the users who were "most underpaid".

We consider a user underpaid if they have received a smaller proportion of past earnings than their share of score. Their pay is balanced if their proportion of earnings equals their score share, and they are overpaid if their proportion of earnings is higher than their share of the score.

We start by imagining a hypothetical world, where the entire grain supply of the project (including this allocation) was allocated according to the current scores. Based on this, we can calculate the "balanced" lifetime earnings for each participant. Usually, some will be "underpaid" (they received less than this amount) and others are "overpaid".

We can sum across all users who were underpaid to find the "total underpayment".

Now that we've calculated each actor's underpayment, and the total underpayment, we divide the allocation's grain budget across users in proportion to their underpayment.

You should use this allocation when you want to divide a fixed budget of grain across participants in a way that aligns long-term payment with total cred scores.

balancedReceipts(policy: BalancedPolicy, credGrainView: CredGrainView, effectiveTimestamp: TimestampMs): $ReadOnlyArray<GrainReceipt>
Parameters
policy (BalancedPolicy)
credGrainView (CredGrainView)
effectiveTimestamp (TimestampMs)
Returns
$ReadOnlyArray<GrainReceipt>

The NonnegativeGrain type ensures Grain amount is >= 0, which is particularly useful in the case of policy budgets or grain transfers.

NonnegativeGrain

The Immediate policy evenly distributes its Grain budget across users based on their Cred in the most recent interval.

It's used when you want to ensure that everyone gets some consistent reward for participating (even if they may be "overpaid" in a lifetime sense). We recommend using a smaller budget for the Immediate policy.

Immediate

Type: "IMMEDIATE"

Split a grain budget in proportion to the cred scores in the most recent time interval, with the option to extend the interval to include the last {numIntervalsLookback} weeks.

immediateReceipts(policy: ImmediatePolicy, credGrainView: CredGrainView, effectiveTimestamp: TimestampMs): $ReadOnlyArray<GrainReceipt>
Parameters
policy (ImmediatePolicy)
credGrainView (CredGrainView)
effectiveTimestamp (TimestampMs)
Returns
$ReadOnlyArray<GrainReceipt>

The Recent policy distributes cred using a time discount factor, weighing recent contributions higher. The policy takes a history of cred scores, progressively discounting past cred scores, and then taking the sum over the discounted scores.

A cred score at time t reads as follows: "The discounted cred c' at a timestep which is n timesteps back from the most recent one is its cred score c multiplied by the discount factor to the nth power."

c' = c * (1 - discount) ** n

Discounts range from 0 to 1, with a higher discount weighing recent contribution higher.

Note that this is a generalization of the Immediate policy, where Immediate is the same as Recent with a full discount, i.e. a discount factor 1 (100%).
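
A small sketch of the discounting formula above (scores ordered oldest first, most recent last):

function discountedCred(credOverTime /*: number[] */, discount /*: number */) {
  const n = credOverTime.length - 1;
  return credOverTime.map((c, i) => c * (1 - discount) ** (n - i));
}
discountedCred([10, 10, 10], 0.5); // [2.5, 5, 10]
discountedCred([10, 10, 10], 1); // [0, 0, 10]: a full discount behaves like Immediate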

Recent

Type: "RECENT"

Split a grain budget based on exponentially weighted recent cred.

recentReceipts(policy: RecentPolicy, credGrainView: CredGrainView, effectiveTimestamp: TimestampMs): $ReadOnlyArray<GrainReceipt>
Parameters
policy (RecentPolicy)
credGrainView (CredGrainView)
effectiveTimestamp (TimestampMs)
Returns
$ReadOnlyArray<GrainReceipt>

The Special policy is a power-maintainer tool for directly paying Grain to a target identity. I'm including it because we will use it to create "initialization" payouts to contributors with prior Grain balances in our old ledger.

This has potential for abuse; I don't recommend making it easy to make special payouts from the UI, since it subverts the "Grain comes from Cred" model.

Special

Type: "SPECIAL"

This is a writable Instance implementation that reads and writes using relative paths.

new WriteInstance(writableDataStorage: WritableDataStorage)

Extends ReadInstance

Parameters
writableDataStorage (WritableDataStorage)
Instance Members
updateGrainIntegrationConfig(result, config)

This is an Instance implementation that reads and writes using relative paths on the local disk.

new LocalInstance(baseDirectory: string)

Extends WriteInstance

Parameters
baseDirectory (string)

Make a directory, if it doesn't already exist.

mkdirx(path: string)
Parameters
path (string)

Check if a directory is empty

Will error if a path that resolves to anything other than a directory is provided

isDirEmpty(dirPath: string): boolean
Parameters
dirPath (string)
Returns
boolean

Disk Storage abstracts away low-level file I/O operations.

new DiskStorage(basePath: string)
Parameters
basePath (string)
Instance Members
get(path)
set(path, contents)
_checkPathPrefix(path)

Normalize the given POSIX path, resolving ".." and "." segments.

When multiple, sequential forward slashes are found, they are replaced by a single forward slash. A trailing forward slash is preserved if present, but not added if absent.

If the path is a zero-length string, "." is returned, representing the current working directory.

A TypeError is thrown if path is not a string.

normalize(path: string): string
Parameters
path (string)
Returns
string
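
For example, following the rules above (these match POSIX path normalization semantics):

normalize("/foo/bar//baz/asdf/quux/.."); // "/foo/bar/baz/asdf"
normalize("foo/./bar/"); // "foo/bar/" (trailing slash preserved)
normalize(""); // "."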

Returns the CLI plugin declaration for the given PluginId, or undefined if there is no such plugin. PluginIds are owner/name pairs, like sourcecred/github.

getPlugin(pluginId: PluginId): Plugin?
Parameters
pluginId (PluginId)
Returns
Plugin?

A PluginId uniquely identifies a Plugin.

Each PluginId takes an owner/name format, as in sourcecred/github.

PluginIds are canonically lower-case.

PluginId

Load and parse a JSON file from DataStorage.

If the file cannot be read, then an error is thrown. If parsing fails, an error is thrown.

loadJson(storage: DataStorage, path: string, parser: P.Parser<T>): Promise<T>
Parameters
storage (DataStorage)
path (string)
parser (P.Parser<T>)
Returns
Promise<T>

Load and parse a JSON file from DataStorage, with a default to use if the file is not found.

This is intended as a convenience for situations where the user may optionally provide configuration in a json file.

The default must be provided as a function that returns a default, to accommodate situations where the object may be mutable, or where constructing the default may be expensive.

If no file is present at that location, then the default constructor is invoked to create a default value, and that is returned.

If attempting to load the file fails for any reason other than ENOENT or a 404 (e.g. the path actually is a directory), then the error is thrown.

If parsing fails, an error is thrown.

loadJsonWithDefault(storage: DataStorage, path: string, parser: P.Parser<T>, def: function (): T): Promise<T>
Parameters
storage (DataStorage)
path (string)
parser (P.Parser<T>)
def (function (): T)
Returns
Promise<T>

Read a text file from DataStorage, with a default string value to use if the file is not found. The file is read in the default encoding, UTF-8.

This is intended as a convenience for situations where the user may optionally provide configuration in a non-JSON file saved to disk.

The default must be provided as a function that returns a default, in case constructing the default may be expensive.

If no file is present at that location, then the default constructor is invoked to create a default value, and that is returned.

If attempting to load the file fails for any reason other than ENOENT or a 404 (e.g. the path actually is a directory), then the error is thrown.

loadFileWithDefault(storage: DataStorage, path: string, def: function (): string): Promise<string>
Parameters
storage (DataStorage)
path (string)
def (function (): string)
Returns
Promise<string>

Retrieve previously scraped data for a GitHub repo from cache.

Note: the GithubToken requirement is planned to be removed. See https://github.com/sourcecred/sourcecred/issues/1580

fetchGithubRepoFromCache(repoId: RepoId, token: GithubToken): Promise<Repository>
Parameters
repoId (RepoId) the GitHub repository to retrieve from cache
token (GithubToken) authentication token to be used for the GitHub API; generate a token at: https://github.com/settings/tokens
Name Description
token.token any
token.cache any
Returns
Promise<Repository>: a promise that resolves to a JSON object containing the data scraped from the repository, with data format to be specified later

Scrape data from a GitHub repo using the GitHub API.

fetchGithubRepo(repoId: RepoId, token: GithubToken): Promise<object>
Parameters
repoId (RepoId) the GitHub repository to be scraped
token (GithubToken) authentication token to be used for the GitHub API; generate a token at: https://github.com/settings/tokens
Name Description
token.token any
token.cache any
Returns
Promise<object>: a promise that resolves to a JSON object containing the data scraped from the repository, with data format to be specified later

Determine the instant at which our GitHub quota will refresh.

The returned promise may reject with a GithubResponseError or string error message.

quotaRefreshAt(token: GithubToken): Promise<Date>
Parameters
token (GithubToken)
Returns
Promise<Date>

Given a resetAt date response from GitHub, determine the actual date until which we want to wait. We clamp to a reasonable range and apply some padding.

_resolveRefreshTime(now: Date, nominalRefreshTime: Date): Date
Parameters
now (Date)
nominalRefreshTime (Date)
Returns
Date

A local mirror of a subset of a GraphQL database.

Clients should interact with this module as follows:

  • Invoke the constructor to acquire a Mirror instance.
  • Invoke registerObject to register a root object of interest.
  • Invoke update to update all transitive dependencies.
  • Invoke extract to retrieve the data in structured form.

See the relevant methods for documentation.
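
A hedged usage sketch of the four steps above (db, schema, rootId, and the update arguments are hypothetical):

const mirror = new Mirror(db, schema);
mirror.registerObject({typename: "Repository", id: rootId});
await mirror.update(postQuery, options); // updates all transitive dependencies
const data = mirror.extract(rootId); // structured form of the mirrored data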

new Mirror(db: Database, schema: Schema.Schema, options: {blacklistedIds: $ReadOnlyArray<Schema.ObjectId>?, guessTypename: function (Schema.ObjectId): (Schema.Typename | null)?}?)
Parameters
db (Database)
schema (Schema.Schema)
options ({blacklistedIds: $ReadOnlyArray<Schema.ObjectId>?, guessTypename: function (Schema.ObjectId): (Schema.Typename | null)?}?)
Instance Members
_initialize()
_createUpdate(updateTimestamp)
registerObject(object)
_nontransactionallyRegisterObject(object, guessMismatchMessage?)
_nontransactionallyRegisterNodeFieldResult(result, context)
_findOutdated(since)
_isUpToDate(queryPlan)
_queryFromPlan(queryPlan, options)
_updateData(updateId, queryResult)
_nontransactionallyUpdateData(updateId, queryResult)
_logRequest(body, variables, timestamp)
_logResponse(rowid, jsonResponse, timestamp)
_logRequestUpdateId(rowid, updateId)
_updateStep(postQuery, options)
update(postQuery, options)
_queryShallow(typename, fidelity)
_getEndCursor(objectId, fieldname)
_queryConnection(typename, fieldname, endCursor, connectionPageSize)
_updateConnection(updateId, objectId, fieldname, queryResult)
_nontransactionallyUpdateConnection(updateId, objectId, fieldname, queryResult)
_queryOwnData(typename)
_updateOwnData(updateId, queryResult)
_nontransactionallyUpdateOwnData(updateId, queryResult)
_queryTypename()
_updateTypenames(queryResult)
_nontransactionallyUpdateTypenames(queryResult)
extract(rootId)

Mirrors data from the Discourse API into a local sqlite db.

This class allows us to persist a local copy of data from a Discourse instance. We have it for reasons similar to why we have a GraphQL mirror for GitHub; it allows us to avoid re-doing expensive IO every time we re-load SourceCred. It also gives us robustness in the face of network failures (we keep however much we downloaded before the failure).

As implemented, the Mirror will never update already-downloaded content, meaning it will not catch edits or deletions. As such, it's advisable to replace the cache periodically (perhaps once a week or month). We may implement automatic cache invalidation in the future.

Each Mirror instance is tied to a particular server. Trying to use a mirror for multiple Discourse servers is not permitted; use separate Mirrors.

new Mirror(repo: MirrorRepository, fetcher: Discourse, serverUrl: string, options: $Shape<MirrorOptions>?)
Parameters
repo (MirrorRepository)
fetcher (Discourse)
serverUrl (string)
options ($Shape<MirrorOptions>?)
Instance Members
_initialize()
_createUpdate(updateTimestamp)
registerObject(object)
_nontransactionallyRegisterObject(object, guessMismatchMessage?)
_nontransactionallyRegisterNodeFieldResult(result, context)
_findOutdated(since)
_isUpToDate(queryPlan)
_queryFromPlan(queryPlan, options)
_updateData(updateId, queryResult)
_nontransactionallyUpdateData(updateId, queryResult)
_logRequest(body, variables, timestamp)
_logResponse(rowid, jsonResponse, timestamp)
_logRequestUpdateId(rowid, updateId)
_updateStep(postQuery, options)
update(postQuery, options)
_queryShallow(typename, fidelity)
_getEndCursor(objectId, fieldname)
_queryConnection(typename, fieldname, endCursor, connectionPageSize)
_updateConnection(updateId, objectId, fieldname, queryResult)
_nontransactionallyUpdateConnection(updateId, objectId, fieldname, queryResult)
_queryOwnData(typename)
_updateOwnData(updateId, queryResult)
_nontransactionallyUpdateOwnData(updateId, queryResult)
_queryTypename()
_updateTypenames(queryResult)
_nontransactionallyUpdateTypenames(queryResult)
extract(rootId)

Decomposition of a schema, grouping types by their kind (object vs. union) and object fields by their kind (primitive vs. link vs. connection).

All arrays contain elements in arbitrary order.

SchemaInfo
Properties
objectTypes ({})
unionTypes ({})

A set of objects and connections that should be updated.

QueryPlan
Properties
typenames ($ReadOnlyArray<Schema.ObjectId>)
objects ($ReadOnlyArray<{typename: Schema.Typename, id: Schema.ObjectId}>)
connections ($ReadOnlyArray<{objectTypename: Schema.Typename, objectId: Schema.ObjectId, fieldname: Schema.Fieldname, endCursor: (EndCursor | void)}>)

An endCursor of a GraphQL pageInfo object, denoting where the cursor should continue reading the next page. This is null when the cursor is at the beginning of the connection (i.e., when the connection is empty, or when first: 0 is provided).

EndCursor

Type: (string | null)

Result describing only the typename of a set of nodes. Used when we only have references to nodes via unfaithful fields.

TypenamesUpdateResult

Type: $ReadOnlyArray<{__typename: Schema.Typename, id: Schema.ObjectId}>

Result describing own-data for many nodes of a given type. Whether a value is a PrimitiveResult or a NodeFieldResult is determined by the schema.

This type would be exact but for facebook/flow#2977, et al.

OwnDataUpdateResult

Type: $ReadOnlyArray<{__typename: Schema.Typename, id: Schema.ObjectId}>

Result describing new elements for connections on a single node.

This type would be exact but for facebook/flow#2977, et al.

NodeConnectionsUpdateResult
Properties
id (Schema.ObjectId)

Result describing all kinds of updates. Each key's prefix determines what type of results the corresponding value represents (see constants below). No field prefix is a prefix of another, so this characterization is complete.

This type would be exact but for facebook/flow#2977, et al.

See: _FIELD_PREFIXES.

UpdateResult

A key of an UpdateResult has this prefix if and only if the corresponding value represents TypenamesUpdateResults.

TYPENAMES

A key of an UpdateResult has this prefix if and only if the corresponding value represents OwnDataUpdateResults.

OWN_DATA

A key of an UpdateResult has this prefix if and only if the corresponding value represents NodeConnectionsUpdateResults.

NODE_CONNECTIONS

Convert a prepared statement into a JS function that executes that statement and asserts that it makes exactly one change to the database.

The prepared statement must use only named parameters, not positional parameters.

The prepared statement must not return data (e.g., INSERT and UPDATE are okay; SELECT is not).

The statement is not executed inside an additional transaction, so in the case that the assertion fails, the effects of the statement are not rolled back by this function.

This is useful when the statement is like UPDATE ... WHERE id = ? and it is assumed that id is a primary key for a record that already exists; if either existence or uniqueness fails, this method will raise an error quickly instead of leading to a corrupt state.

For example, this code...

const setName: ({|+userId: string, +newName: string|}) => void =
  _makeSingleUpdateFunction(
    "UPDATE users SET name = :newName WHERE id = :userId"
  );
setName({userId: "user:foo", newName: "The Magnificent Foo"});

...will update user:foo's name, or throw an error if there is no such user or if multiple users have this ID.

_makeSingleUpdateFunction(stmt: Statement): function (Args): void
Parameters
stmt (Statement)
Returns
function (Args): void

GraphQL structured query data format.

Main module exports:

  • lots of types for various GraphQL language constructs
  • the build object, providing a fluent builder API
  • the stringify object, and particularly stringify.body
  • the two layout strategies multilineLayout and inlineLayout
Body

Type: Array<Definition>

A strategy for stringifying a sequence of GraphQL language tokens.

LayoutStrategy

Create a layout strategy that lays out text over multiple lines, indenting with the given tab string (such as "\t" or " ").

multilineLayout(tab: string): LayoutStrategy
Parameters
tab (string)
Returns
LayoutStrategy

Create a layout strategy that lays out all text on one line.

inlineLayout(): LayoutStrategy
Returns
LayoutStrategy

Data types to describe a particular subset of GraphQL schemata. Schemata represented by this module must satisfy these constraints:

  • Every object must have an id field of primitive type.
  • Every field of an object must be either a primitive, a reference to a single (possibly nullable) object, or a connection as described in the Relay cursor connections specification. In particular, no field may directly contain a list.
  • Interface types must be represented as unions of all their implementations.
Typename

Type: string

A derived ID to reference a cache layer.

CacheId

Derives the CacheId for a RepoId.

Returned CacheIds will be:

  • Deterministic
  • Unique for this plugin
  • Lowercase
  • Safe to use for filenames
  • Distinct for semantically distinct inputs (input IDs that differ only in case may map to the same output, because GitHub does not permit collisions-modulo-case)
cacheIdForRepoId(repoId: RepoId): CacheId
Parameters
repoId (RepoId)
Returns
CacheId

Run a retryable operation until it terminates or exhausts its retry policy. If attempt ever rejects, this function also immediately rejects with the same value.

retry(attempt: function (): Promise<AttemptOutcome<T, E>>, policy: $Shape<RetryPolicy>?, io: Io): Promise<Result<T, E>>
Parameters
attempt (function (): Promise<AttemptOutcome<T, E>>)
policy ($Shape<RetryPolicy>?)
io (Io = realIo)
Returns
Promise<Result<T, E>>

Mutate the RelationalView, by replacing all of the post bodies with empty strings. Usage of this method is a convenient hack to save space, as we don't currently use the bodies after the _addReferences step. Also removes commit messages.

compressByRemovingBody()

Creates a Map<URL, NodeAddressT> with an entry for each ReferentEntity in this view. Note: duplicates are accepted within one view. However, for any URL, the corresponding N.RawAddress should be the same, or we'll throw an error.

urlReferenceMap(): Map<URL, NodeAddressT>
Returns
Map<URL, NodeAddressT>

Invoke this function when the GitHub GraphQL schema docs indicate that a connection provides a list of nullable nodes, but we expect them all to always be non-null.

This will drop any null elements from the provided list, issuing a warning to stderr if nulls are found.

expectAllNonNull(context: {__typename: string, id: string}, fieldname: string, xs: $ReadOnlyArray<(null | T)>): Array<T>
Parameters
context ({__typename: string, id: string})
fieldname (string)
xs ($ReadOnlyArray<(null | T)>)
Returns
Array<T>
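
For example, with hypothetical fetched nodes comment1 and comment2:

const comments = expectAllNonNull(
  {__typename: "Issue", id: "issue-id"}, // identifies the offending object
  "comments",
  [comment1, null, comment2]
);
// comments is [comment1, comment2]; a warning naming the Issue
// and the "comments" field is printed to stderr.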

Parse GitHub references from a Markdown document, such as an issue or comment body. This will include references that span multiple lines (across softbreaks), and exclude references that occur within code blocks.

parseReferences(body: string): Array<ParsedReference>
Parameters
body (string)
Returns
Array<ParsedReference>
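
A usage sketch; the ParsedReference fields shown in the comment are assumptions:

const refs = parseReferences(
  "Fixes #123.\n" +
    "Thanks to @wchargin for the review.\n" +
    "    #456 sits inside a code block, so it is excluded."
);
// refs might look like:
//   [{refType: "BASIC", ref: "#123"}, {refType: "BASIC", ref: "@wchargin"}]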

Extract maximal contiguous blocks of text from a Markdown string, in source-appearance order.

For the purposes of this method, code (of both the inline and block varieties) is not considered text, and will not be included in the output at all. HTML contents are similarly excluded.

Normal text, emphasized/strong text, link text, and image alt text all count as text and will be included. A block of text is not required to have the same formatting: e.g., the Markdown document hello *there* [you](https://example.com) has one contiguous block of text: "hello there you".

Softbreaks count as normal text, and render as a single space. Hardbreaks break a contiguous block of text.

Block-level elements, such as paragraphs, lists, and block quotes, break contiguous blocks of text.

See test cases for examples.

textBlocks(string: string): Array<string>
Parameters
string (string)
Returns
Array<string>
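
Two small examples following the cases described above:

textBlocks("hello *there* [you](https://example.com)");
// => ["hello there you"]
textBlocks("first paragraph\n\n> a block quote");
// => ["first paragraph", "a block quote"] (block elements split blocks)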

Builds a GithubReferenceDetector from multiple RelationalViews. As a RelationalView should only be used for one repository at a time, you will commonly want to compose several of them into one GithubReferenceDetector.

Note: duplicates are normally expected. However, for any URL, the corresponding NodeAddressT should be the same, or we'll throw an error.

fromRelationalViews(views: $ReadOnlyArray<RelationalView>): GithubReferenceDetector
Parameters
views ($ReadOnlyArray<RelationalView>)
Returns
GithubReferenceDetector

A reference detector which uses a pregenerated Map<URL, NodeAddressT> as a lookup table.

Note: this is sensitive to canonicalization issues because it's based on string comparisons. For example, the following pairs would be treated as distinct keys:

  • "http://foo.bar/123" vs. "https://foo.bar/123" (protocol)
  • "https://foo.bar/123" vs. "https://foo.bar/123/" (trailing slash)
  • "https://foo.bar/123" vs. "https://foo.bar/123?x=y" (query parameters)

new MappedReferenceDetector(map: Map<URL, NodeAddressT>)
Parameters
map (Map<URL, NodeAddressT>)

A ReferenceDetector which takes a base ReferenceDetector and applies a translate function to any results.

new TranslatingReferenceDetector(base: ReferenceDetector, translate: TranslateFunction)
Parameters
base (ReferenceDetector)
translate (TranslateFunction)

Validates a token against known formatting. Throws an error if it appears invalid.

Personal access token https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line

Installation access token https://developer.github.com/v3/apps/#create-a-new-installation-token

validateToken(token: string): GithubToken
Parameters
token (string)
Returns
GithubToken

The serialized form of the Discord config. If you are editing a config.json file, it should match this type.

TODO: This type is kind of disorganized. It would be cleaner to have all the weight configuration in single optional sub-object, I think. Consider cleaning up before 0.8.0.

DiscordConfigJson

Type: $ReadOnlyArray<{guildId: Model.Snowflake, reactionWeightConfig: ReactionWeightConfig?, roleWeightConfig: RoleWeightConfig?, channelWeightConfig: ChannelWeightConfig?, propsChannels: $ReadOnlyArray<Model.Snowflake>?, includeNsfwChannels: boolean?, simplifyGraph: boolean?, beginningDate: TimestampISO?}>
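
A minimal config.json sketch matching this type; the snowflake IDs are hypothetical, and the optional weight sub-objects are omitted:

[
  {
    "guildId": "678348980639498428",
    "propsChannels": ["698443555942760499"],
    "includeNsfwChannels": false,
    "beginningDate": "2020-06-01T00:00:00.000Z"
  }
]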

Upgrade from the version on disk to the DiscordConfig.

For now, this allows us to refactor to a cleaner internal type without breaking any existing users. We may need this indefinitely if e.g. we decide to de-serialize the raw JSON into maps (since maps can't be written directly to JSON).

_upgrade(json: DiscordConfigJson): DiscordConfigs
Parameters
json (DiscordConfigJson)
Returns
DiscordConfigs

All of the information necessary to add a message to the graph, along with its reactions and its mentions.

GraphMessage
Properties
message (Model.Message)
author ((Model.GuildMember | null))
reactions ($ReadOnlyArray<GraphReaction>)
mentions ($ReadOnlyArray<GraphMention>)
channelId (Model.Snowflake)
channelName (string)
channelParentId (Model.Snowflake?)

Find all of the messages that should go into the graph. This will deliberately ignore messages that have no reactions, since they have no Cred impact and don't need to go into the graph.

findGraphMessages(repo: SqliteMirrorRepository): void
Parameters
repo (SqliteMirrorRepository)
Returns
void

TopicHasLikedPost edges connect a Topic to the posts in that topic that were liked, in proportion to the total like weight of the post in question.

See: https://github.com/sourcecred/sourcecred/issues/1896

TopicHasLikedPost
Properties
edge (Edge)
weight (number)

Parse the links from a Discourse post's cookedHtml, generating an array of UrlStrings. All of the UrlStrings will contain the full server URL (i.e. relative references are made absolute). The serverUrl is required so that we can do this.

parseLinks(cookedHtml: string, serverUrl: string): Array<UrlString>
Parameters
cookedHtml (string)
serverUrl (string)
Returns
Array<UrlString>
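
For example (a sketch), relative hrefs are resolved against the server URL:

parseLinks(
  '<p><a href="/t/some-topic/123">topic</a>' +
    ' <a href="https://example.com/x">elsewhere</a></p>',
  "https://discourse.sourcecred.io"
);
// => ["https://discourse.sourcecred.io/t/some-topic/123",
//     "https://example.com/x"]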

Tags can be configured to confer specific like-weight multipliers when added to a Topic. If a tag does not have a configured weight, the defaultWeight is applied. An example configuration might look like:

"weights": {
  "defaultTagWeight": 1,
  "tagWeights": {
    "foo": 0,
    "bar": 1.25,
    "baz": 2
  }
  // categoryWeight configs...
}

where foo, bar, and baz are the names of tags used in discourse.

When multiple tags are assigned to a topic, their weights are multiplied together to yield a total tag weight multiplier. In our example configuration, if both foo and bar are added to a topic, likes on posts in the topic will have a weight of 0 (0 * 1.25 = 0), which means that no cred will be minted by those likes.

If "bar" and "baz" are both added to another topic, the likes on all posts in that topic will carry a weight of 2.5 (1.25 * 2 = 2.5), which means that 2.5x as much cred will be minted by those likes.

defaultTagWeight

Type: NodeWeight

Categories can be configured to confer a specific like-weight multiplier when added to a Topic. If a category does not have a configured weight, the defaultWeight is applied. An example configuration might look like:

"weights": {
  "defaultCategoryWeight": 1,
  "categoryWeights": {
    "5": 0,
    "36": 1.25
  }
  // tagWeight configs...
}

where "5" and "36" are the categoryIds in discourse.

An easy way to find the categoryId for a given category is to browse to the categories section in discourse (e.g. https://discourse.sourcecred.io/categories). Mousing over or clicking on a category will bring you to a url of the shape https://exampleUrl.com/c/<categoryName>/<categoryId>. For example, clicking on the community category in sourcecred navigates to https://discourse.sourcecred.io/c/community/26, where the categoryId is 26.

defaultCategoryWeight

Type: NodeWeight

An interface for reading the local Discourse data.

ReadRepository
Instance Members
topics()
posts()
findPostInTopic(topicId, indexWithinTopic)
users()
likes()
findUser(username)
topicById(id)
postById(id)

The timestamp of the last time we've loaded category definition topics. Used for determining whether we should reload them during an update.

definitionCheckMs

Type: number

The most recent bumpedMs timestamp across all topics. Used to determine the most recent topic changes stored in our local mirror, and hence which topics we should fetch from the API during an update.

topicBumpMs

Type: number

For the given topic ID, retrieves the bumpedMs value. Returns null when the topic isn't found.

bumpedMsForTopic(id: TopicId): (number | null)
Parameters
id (TopicId)
Returns
(number | null)

Finds the SyncHeads values, used as input to skip already up-to-date content when mirroring.

syncHeads(): SyncHeads
Returns
SyncHeads

Idempotent insert/replace of a Topic, including all its Posts.

Note: this will insert new posts, update existing posts, and delete old posts. As these are separate queries, we use a transaction here.

replaceTopicTransaction(topic: Topic, posts: $ReadOnlyArray<Post>): void
Parameters
topic (Topic)
posts ($ReadOnlyArray<Post>)
Returns
void

Bumps the definitionCheckMs (from SyncHeads) to the provided timestamp.

bumpDefinitionTopicCheck(timestampMs: TimestampMs): void
Parameters
timestampMs (TimestampMs)
Returns
void

Class for retrieving data from the Discourse API.

The Discourse API implements the JSON endpoints for all functionality of the actual site. As such, it tends to return a lot of information that we don't care about (in contrast to a GraphQL API, which would give us only what we ask for). We therefore implement a simple interface over it, which both abstracts over calling the API and post-processes the results down to the data that is relevant for us.

fetch

The "view" received from the Discourse API when getting a topic by ID.

This omits some relevant data, such as bumpedMs; the separate type makes this distinction clear.

TopicView
Properties
id (TopicId)
categoryId (CategoryId)
tags ($ReadOnlyArray<Tag>)
title (string)
timestampMs (TimestampMs)
authorUsername (string)

The "latest" format Topic from the Discourse API when getting a list of sorted topics.

This omits relevant data, such as authorUsername; the separate type makes this distinction clear.

TopicLatest
Properties
id (TopicId)
categoryId (CategoryId)
tags ($ReadOnlyArray<Tag>)
title (string)
timestampMs (TimestampMs)
bumpedMs (number)

A complete Topic object.

Topic
Properties
id (TopicId)
categoryId (CategoryId)
tags ($ReadOnlyArray<Tag>)
title (string)
timestampMs (TimestampMs)
bumpedMs (number)
authorUsername (string)

Interface over the external Discourse API, structured to suit our particular needs. We have an interface (as opposed to just an implementation) to enable easy mocking and testing.

Discourse
Instance Members
likesByUser(targetUsername, offset)
topicsBumpedSince(sinceMs)

Parses a "latest" topic.

A "latest" topic, is a topic as returned by the /latest.json API call, and has a distinct assumptions:

  • bumped_at is always present.


parseLatestTopic(json: any): TopicLatest
Parameters
json (any)
Returns
TopicLatest

A Discourse ReferenceDetector that relies on database lookups.

new DiscourseReferenceDetector(data: ReadRepository)
Parameters
data (ReadRepository)

An intermediate representation of an Initiative.

This makes the assumption that a Champion cannot fail in championing. Instead of a success status, they should be removed if unsuccessful.

There is also no timestamp for completion or for each edge; these should be inferred from the node timestamps instead. We could support accurate edge timestamps by interpreting wiki histories. However, the additional complexity and requirements put on the tracker don't seem worthwhile right now, especially because cred can flow even before bounties are released. See https://discourse.sourcecred.io/t/write-the-initiatives-plugin/269/6

Initiative
Properties
id (InitiativeId)
title (string)
timestampMs (TimestampMs)
weight (InitiativeWeight?)
completed (boolean)
dependencies (EdgeSpec)
references (EdgeSpec)
contributions (EdgeSpec)
champions ($ReadOnlyArray<URL>)

Represents a source of Initiatives.

InitiativeRepository
Instance Members
initiatives()

Represents an "inline contribution" node. They're called entries and named by type: contribution entry, reference entry, dependency entry. The generalization of this is a node entry.

NodeEntryField

Type: ("DEPENDENCY" | "REFERENCE" | "CONTRIBUTION")

Takes a NodeEntryJson and normalizes it to a NodeEntry.

Will throw when required fields are missing. Otherwise it handles default values and converts ISO timestamps.

normalizeNodeEntry(input: NodeEntryJson, defaultTimestampMs: TimestampMs): NodeEntry
Parameters
input (NodeEntryJson)
defaultTimestampMs (TimestampMs)
Returns
NodeEntry
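
A sketch under stated assumptions: the JSON field name timestampIso and the defaulting behavior are guesses consistent with the surrounding docs, not confirmed:

const entry = normalizeNodeEntry(
  {title: "Draft the spec", timestampIso: "2020-01-15T12:00:00.000Z"}, // assumed fields
  defaultTimestampMs // used when the entry carries no timestamp of its own
);
// entry.title === "Draft the spec"; a missing key would default to a
// slug of the title (see _titleSlug below).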

Creates a url-friendly-slug from the title of a NodeEntry. Useful for generating a default key.

Note: keys are not required to meet the formatting rules of this slug; this is mostly for predictability and convenience of NodeAddresses.

_titleSlug(title: string): string
Parameters
title (string)
Returns
string
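
A usage sketch; the exact slug rules (lowercasing, hyphenation) are assumptions:

_titleSlug("Write the Initiatives plugin!");
// => e.g. "write-the-initiatives-plugin"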

Represents a single Initiative using a file as source.

Note: The file name will be used to derive the InitiativeId. So it doesn't make sense to use this outside of the context of an InitiativesDirectory.

InitiativeFile

Type: InitiativeFileV020

When provided with the initiative NodeAddressT of an InitiativeFile, this extracts the URL from it, or returns null when the address is not for an InitiativeFile.

initiativeFileURL(address: NodeAddressT): (string | null)
Parameters
address (NodeAddressT)
Returns
(string | null)

Represents an Initiatives directory.

Initiative directories contain a set of InitiativeFiles matching a *.json pattern, where the file name is the ID of that Initiative. Additionally, we require a remoteUrl for this directory, which we expect to be browsable online; this allows us to create a ReferenceDetector.

InitiativesDirectory
Properties
localPath (string)
remoteUrl (string)

Opaque because we only want this file's functions to create these load results. However, we do allow anyone to consume its properties.

LoadedInitiativesDirectory

Loads a given InitiativesDirectory.

Parameters
dir (InitiativesDirectory)
Returns
Promise<LoadedInitiativesDirectory>

A type which supports multiple ways of defining what edges an Initiative has. Currently includes reference detected URLs and NodeEntries. This is the normalized variant of EdgeSpecJson.

EdgeSpec
Properties
urls ($ReadOnlyArray<URL>)
entries ($ReadOnlyArray<NodeEntry>)

Takes an EdgeSpecJson and normalizes it to an EdgeSpec.

Will throw when required fields are missing or duplicate keys are found. Otherwise it handles default values and converts ISO timestamps. Note: we allow the EdgeSpecJson to be undefined to easily support omitting edges entirely, while still normalizing to an EdgeSpec.

normalizeEdgeSpec(spec: EdgeSpecJson?, defaultTimestampMs: TimestampMs): EdgeSpec
Parameters
spec (EdgeSpecJson?)
defaultTimestampMs (TimestampMs)
Returns
EdgeSpec
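
A sketch; the JSON field names mirror the EdgeSpec properties above, which is an assumption about EdgeSpecJson:

const spec = normalizeEdgeSpec(
  {
    urls: ["https://github.com/example-org/example-repo/pull/1"],
    entries: [{title: "Design review"}], // see normalizeNodeEntry above
  },
  defaultTimestampMs
);
// Omitting the JSON entirely is also allowed:
// normalizeEdgeSpec(undefined, defaultTimestampMs) still yields an EdgeSpec.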

A separate function to validate an EdgeSpec after it's normalized. Normally you don't need to invoke this directly.

_validateEdgeSpec(spec: EdgeSpec): EdgeSpec
Parameters
spec (EdgeSpec)
Returns
EdgeSpec

Find the NodeEntries which have a duplicate key.

_findDuplicatesByKey(entries: $ReadOnlyArray<NodeEntry>): Set<NodeEntry>
Parameters
entries ($ReadOnlyArray<NodeEntry>)
Returns
Set<NodeEntry>

Finds elements in the array which are included twice or more. Uses a === comparison, not deep equality.

findDuplicates(items: $ReadOnlyArray<T>): Set<T>
Parameters
items ($ReadOnlyArray<T>)
Returns
Set<T>
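
For example:

findDuplicates(["a", "b", "a", "c", "c"]);
// => Set {"a", "c"}
findDuplicates([{x: 1}, {x: 1}]);
// => empty Set: two distinct object identities (=== comparison, no deep equality)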