rank Tag

Overview

The rank tag is used in Trusted Assertion events to convey computed ranking scores for various subjects. It provides a normalized way for service providers to express quality, popularity, or reputation metrics about users, events, addressable events, and external identifiers.

Specification

  • Tag Name: rank
  • Defined in: NIP-85: Trusted Assertions
  • Format: ["rank", "<score>"]

Parameters

  • Position 0 (Tag name): always "rank"; string; required
  • Position 1 (Score): ranking score normalized to the 0-100 scale; string (integer); required

Usage Context

The rank tag appears in the following event kinds:

  • Kind 30382 (User Trusted Assertions): User reputation/quality ranking
  • Kind 30383 (Event Trusted Assertions): Event popularity/quality ranking
  • Kind 30384 (Addressable Event Trusted Assertions): Addressable content ranking
  • Kind 30385 (External Identifier Trusted Assertions): External entity ranking
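Clients typically fetch these assertions with a standard NIP-01 subscription, filtering by the provider's pubkey and the subject's d tag. A minimal sketch (the service pubkey and subscription id here are placeholders):

```python
import json

# Placeholder values: substitute the real service provider pubkey and subject.
service_pubkey = "0000000000000000000000000000000000000000000000000000000000000001"
subject = "e88a691e98d9987c964521dff60025f60700378a4879180dcbbb4a5027850411"

# NIP-01 REQ message requesting all four trusted-assertion kinds about
# one subject, authored by one known service provider.
req = ["REQ", "rank-sub", {
    "kinds": [30382, 30383, 30384, 30385],
    "authors": [service_pubkey],
    "#d": [subject],
}]

print(json.dumps(req))
```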

Format Details

Score Values

  • Range: 0-100 (integer values)
  • 0: Lowest possible ranking
  • 100: Highest possible ranking
  • Normalization: All service providers should normalize their scores to this 0-100 range for consistency

String Representation

  • Scores are represented as string integers in the tag value
  • Examples: "0", "50", "89", "100"
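The format rules above can be captured in a small helper. This is a sketch, not part of the specification; the function name is ours:

```python
def make_rank_tag(score: int) -> list[str]:
    """Build a rank tag from an integer score, enforcing the 0-100 range.

    The score is serialized as a string, per the tag format.
    """
    if not 0 <= score <= 100:
        raise ValueError("rank score must be an integer in 0-100")
    return ["rank", str(score)]

print(make_rank_tag(89))  # ['rank', '89']
```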

Client Behavior

Clients should:

  1. Display Rankings:

    • Present ranking scores in user interfaces where appropriate
    • Consider visual representations (stars, bars, percentages)
    • Indicate the source service provider for transparency
  2. Score Interpretation:

    • Treat scores as relative rankings within each service provider's algorithm
    • Handle different ranking methodologies from different providers
    • Allow users to compare rankings from multiple providers
  3. Validation:

    • Verify scores are within the 0-100 range
    • Handle invalid or out-of-range values gracefully
    • Validate that scores are numeric strings
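The validation steps above might look like this on the client side (a sketch; the helper name is ours). Note that `str.isdigit` rejects signs, decimals, and non-numeric strings in one check:

```python
def parse_rank(tags: list[list[str]]):
    """Return the rank score as an int, or None if absent or invalid."""
    for tag in tags:
        if len(tag) >= 2 and tag[0] == "rank":
            value = tag[1]
            # Must be a plain non-negative integer string...
            if value.isdigit():
                score = int(value)
                # ...within the normalized 0-100 range.
                if 0 <= score <= 100:
                    return score
            # Present but invalid or out of range: fail gracefully.
            return None
    return None
```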

Service Provider Behavior

Service providers should:

  1. Score Generation:

    • Normalize all ranking calculations to the 0-100 scale
    • Apply a consistent scoring methodology within each algorithm
    • Update rankings only when actual changes occur
  2. Algorithm Transparency:

    • Document their ranking methodology in kind 0 metadata events
    • Use separate service keys for different ranking algorithms
    • Provide clear explanations of what their ranking represents
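One common way to meet the normalization requirement is min-max scaling of the algorithm's raw output onto 0-100. NIP-85 does not mandate any particular method; this is just an illustration:

```python
def normalize(raw: float, lo: float, hi: float) -> int:
    """Min-max scale a raw algorithm score onto the 0-100 rank scale.

    lo/hi are the algorithm's own output bounds; raw values outside
    them are clamped so the result always fits the tag format.
    """
    if hi <= lo:
        raise ValueError("upper bound must exceed lower bound")
    clamped = min(max(raw, lo), hi)
    return round((clamped - lo) / (hi - lo) * 100)

print(normalize(7.5, 0, 10))  # 75
```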

Use Cases

User Rankings:

  • Web of Trust reputation scores
  • Influence or authority metrics
  • Community standing indicators
  • Spam/quality detection scores

Event Rankings:

  • Content quality assessments
  • Engagement-based popularity scores
  • Relevance or importance rankings
  • Virality or trending indicators

Content Rankings:

  • Article or long-form content quality
  • Repository or project popularity
  • Resource value assessments
  • Community curation scores

External Entity Rankings:

  • Website trustworthiness scores
  • Book or media ratings
  • Location or business rankings
  • Product or service quality scores

Examples

User Ranking

```json
{
  "kind": 30382,
  "tags": [
    ["d", "e88a691e98d9987c964521dff60025f60700378a4879180dcbbb4a5027850411"],
    ["rank", "89"]
  ],
  "content": "",
  "sig": "..."
}
```

Event Ranking

```json
{
  "kind": 30383,
  "tags": [
    ["d", "b3e392b11f5d4f28321cedd09303a748acfd0487aea5a7450b3481c60b6e4f87"],
    ["rank", "92"]
  ],
  "content": "",
  "sig": "..."
}
```

External Entity Ranking

```json
{
  "kind": 30385,
  "tags": [
    ["d", "isbn:978-0-321-35668-3"],
    ["k", "book"],
    ["rank", "85"]
  ],
  "content": "",
  "sig": "..."
}
```
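A service provider would assemble events like those above programmatically. A minimal sketch (the helper name is ours; the event id, pubkey, and signature must still be added by the provider's NIP-01 signing step):

```python
import time

def build_assertion(subject_hex: str, score: int) -> dict:
    """Assemble an unsigned kind 30382 user trusted-assertion skeleton.

    The d tag identifies the ranked subject; id/pubkey/sig are omitted
    because they are produced during signing.
    """
    return {
        "kind": 30382,
        "created_at": int(time.time()),
        "tags": [["d", subject_hex], ["rank", str(score)]],
        "content": "",
    }

event = build_assertion("e88a691e98d9987c964521dff60025f60700378a4879180dcbbb4a5027850411", 89)
```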

Related Tags

  • Metric Tags: Used alongside rank in trusted assertions

    • followers, post_cnt, reply_cnt (user metrics)
    • comment_cnt, reaction_cnt, zap_cnt (engagement metrics)
    • zap_amount, zap_avg_amt_day_recd (economic metrics)
  • Identifier Tags:

    • d tag (identifies the subject being ranked)
    • k tag (specifies external identifier type in kind 30385)

Notes

  • The rank tag provides a standardized way to compare quality assessments across different service providers and algorithms.
  • Service providers should document their ranking methodology to help users understand what the scores represent.
  • Multiple service providers can rank the same subject, allowing users to compare different algorithmic approaches.
  • Rankings are subjective and depend on the service provider's algorithm and data sources.
  • Clients should clearly indicate which service provider generated each ranking score.