markxus

package module
v0.1.7
Published: Jan 1, 2025 License: MIT Imports: 6 Imported by: 4

README

Markxus


Usage demo (the GIF is fast-forwarded)

Markxus is an LLM-powered markdown converter targeted at Nexus Mods mod pages. By default, Markxus is opinionated: it converts Nexus Mods HTML + BBCode into markdown catered to the Obsidian markdown format. The cli package uses resty as its HTTP client and google-generative-ai as its LLM client.

This package provides built-in support for both Resty (./resty) and Google Generative AI (./genai).

However, the main package is client-agnostic by design: any combination of HTTP and LLM clients can be used, but a dedicated cli package must be created to make use of those interfaces.

Obsidian markdown


The default markdown header format contains frontmatter, and only a handful of editors are capable of making use of this data. Obsidian is prominently known for its features and its rich community support. Obsidian supports showing and editing frontmatter by default, and the Obsidian community has many plugins that can handle this data.

Supported plugins:

More support to be added later.

Background

Recently, I tried to seriously rebuild my Skyrim mod list for the latest version. Obsidian is very handy for organizing a huge pile of mod information, and with its graph view, it helps a lot in quickly finding relevant mods, checking their compatibility with each other, etc. However, I don't have that much time to index every mod I use into Obsidian, so here we are.


Now, an LLM is needed in this case since I need to annotate all cross-mod references with [[Obsidian internal linking]], and an LLM is intelligent enough to guess whether some tokens are actually a mention or title of another mod. Converting a Nexus Mods page to markdown is also not that straightforward, since the page contains a mix of HTML and BBCode, so relying on the LLM to intuit the correct tag usage is the best choice here.

For now, this package only supports Google Generative AI, since that is my only choice at the moment.

CLI Usage

Installation

Go is needed to install the CLI. I will set up an executable build pipeline later if I feel like it.

go install github.com/slainless/markxus/cli/markxus@latest
Configuration

Most of the app's functionality relies on the Nexus Mods and Google Generative AI API keys, so it is important to set up a global (or local) configuration; otherwise, both vars will need to be supplied to the binary on every use.

To initialize global (or local) configuration, run:

markxus config init

It will create .markxus.yaml in the user directory (or the cwd, depending on the config type). It will also prompt you for both API keys and store them in the OS keyring. It is recommended to store the API keys in OS-managed credential storage rather than on disk.

The edit command can also be used to initialize the file and then directly open a file editor against the config, by running:

markxus config edit -i

To change API key later without going through config init, run:

markxus config set NEXUS_API_KEY <new_key>

or

markxus config set GEN_AI_API_KEY <new_key>

Command flags are available to config init to alter its behaviour:

  • --force or FORCE_OVERWRITE: Force overwrite of the file if it exists
  • --type or CONFIG_TYPE: Set the config type, i.e. whether to generate a global or local (cwd) config
Changing Configuration

config edit can be used to open file editor for the config, using OS preferred text editor:

markxus config edit

Command flags are available to config edit to alter its behaviour:

  • -i or INIT_CONFIG: Initialize the config if it does not exist
  • --type or CONFIG_TYPE: Set the config type, i.e. whether to edit the global or local (cwd) config

Individual config keys can also be set using:

markxus config set <yaml_or_env_key> <value>

It must be noted, however, that NEXUS_API_KEY and GEN_AI_API_KEY cannot be written to the config via this command and must be edited into the file manually. Setting either field will configure the OS credential storage instead of the config (and, obviously, --type is ignored when setting either field).

Command flags are available to config set to alter its behaviour:

  • --type or CONFIG_TYPE: Set the config type, i.e. whether to set the value in the global or local (cwd) config
Available Options

All these options can also be set from env vars or CLI flags.

Google Generative AI
  • API key

    Flag: --genai-key, --gk
    Env: GEN_AI_API_KEY
    YAML: genai_api_key
    

    Self-explanatory. Required.

  • Model name

    Flag: --model, --m
    Env: GEN_AI_MODEL_NAME
    YAML: genai_model_name
    

    LLM model used for conversion. Defaults to gemini-1.5-flash.

  • Prompt format

    Flag: --prompt, --p
    Env: GEN_AI_PROMPT_FORMAT
    YAML: genai_prompt_format
    

    Prompt format used to query the LLM. Defaults to prompt.txt.

    The prompt can be used to change the direction of the resulting markdown.

Nexus Mods
  • API key

    Flag: --nexus-key, --nk
    Env: NEXUS_API_KEY
    YAML: nexus_api_key
    

    Self-explanatory. Required.

  • API mod url format

    Flag: --api-url-format, --af
    Env: NEXUS_URL_GET_MOD_FORMAT
    YAML: nexus_url_get_mod_format
    

    URL format used when querying for mod data. Defaults to:

    https://api.nexusmods.com/v1/games/%v/mods/%v.json

  • Mod page url format

    Flag: --page-url-format, --pf
    Env: NEXUS_URL_MOD_PAGE_FORMAT
    YAML: nexus_url_mod_page_format
    

    Mod page URL format used in markdown header generation. Defaults to:

    https://nexusmods.com/%v/mods/%v

Generation
  • Markdown header format

    Flag: --header-format, --hf
    Env: MARKDOWN_HEADER_FORMAT
    YAML: markdown_header_format
    

    Format used to generate the header of the markdown. Defaults to header.txt.

    The header format can be used to alter the resulting header of the markdown, in particular how the frontmatter is generated.

Helper
  • Fallback game code

    Flag: --game-code, --gc
    Env: FALLBACK_GAME_CODE
    YAML: fallback_game_code
    

    Game code used when it is not supplied in the generation command.

    The command will fail when no game code is found in either the args or the config.
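Putting the YAML keys above together, a .markxus.yaml might look like the following sketch. The values are illustrative, genai_model_name is an assumed key name, and the two API keys normally live in the OS keyring rather than in this file:

```yaml
# Illustrative config — keys taken from the options above,
# except genai_model_name, which is assumed.
genai_model_name: gemini-1.5-flash
genai_prompt_format: prompt.txt
nexus_url_get_mod_format: "https://api.nexusmods.com/v1/games/%v/mods/%v.json"
nexus_url_mod_page_format: "https://nexusmods.com/%v/mods/%v"
markdown_header_format: header.txt
fallback_game_code: skyrimspecialedition
```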

Generate markdown

The most basic command to generate the markdown is:

markxus generate [$game_code] $mod_id

By default, it will output the resulting markdown to the current working directory in the format {sanitized_mod_name}.md. It will also prompt for permission to overwrite when the file already exists.

If $game_code is not supplied, it will fall back to the config & env vars. If none are available, the program will exit.

Aside from configuration flags, these flags are also available to alter the command's behaviour:

  • --force or FORCE_OVERWRITE: Force overwrite of a conflicting file, if it exists, skipping the prompt.
Generate multiple markdowns

TBA

Programmatic Usage

The simplest example of usage:

package main

import (
	"context"
	"fmt"

	"github.com/slainless/markxus"
	"github.com/slainless/markxus/genai"
	"github.com/slainless/markxus/nexus"
	"github.com/slainless/markxus/resty"
)

func main() {
	nexusKey := nexus.WithApiKey("{{NEXUS_API_KEY}}")
	nexus, err := nexus.NewClient(nexusKey, nexus.WithHTTPDriver(resty.NewRestyClient()))
	if err != nil {
		panic(err)
	}

	genaiKey := genai.WithApiKey("{{GENERATIVE_AI_API_KEY}}")
	genai, err := genai.NewGenAiClient(context.TODO(), genaiKey)
	if err != nil {
		panic(err)
	}

	markxus := markxus.NewMarkxus(nexus, genai)
	mod, err := markxus.Generate(context.TODO(), "skyrimspecialedition", "68068")
	if err != nil {
		panic(err)
	}

	fmt.Printf("%+v", mod)
}

To run the example above, you will need a Nexus Mods API key and a Google Generative AI API key.

Developer Notes

This is my first time building a CLI using charmbracelet's bubbletea, and I admit the framework is a bit overwhelming for users who don't come from an Elm background; I'm too used to React's reactive paradigm.

Building the TUI was actually the most time-consuming part of this project, largely due to my own inexperience with these libraries. I managed to make it work nonetheless 😁. But some parts of the CLI need refactoring and improvement, specifically the parts that still use a pure huh form. Ideally, those should be incorporated into the bubbletea modeling framework and made to work seamlessly with the urfave/cli flow.

Documentation

Index

Constants

const DefaultGenAiModelName = "gemini-1.5-flash"
const DefaultUrlModPageFormat = "https://nexusmods.com/%v/mods/%v"

Variables

var DefaultGenAiPromptFormat string
var DefaultMarkdownHeaderFormat string
var DefaultMarkdownHeaderTemplate *template.Template

Functions

This section is empty.

Types

type CategoryIconMap

type CategoryIconMap struct {
	Id   int    `json:"category_id"`
	Name string `json:"name"`
	Icon string `json:"icon"`
}

type Generated

type Generated struct {
	Mod     nexus.SchemaMod
	Content string
	Header  string
	Error   error
}

type GenerationContext

type GenerationContext struct {
	// sorted by call sequence
	OnModFetched         OnModFetchedHook
	OnHeaderCreated      OnHeaderCreationHook
	OnLlmStreamConsuming LlmStreamConsumeHook
	CategoryIconMap      map[int]*CategoryIconMap
}

type GenerationContextOption

type GenerationContextOption func(ctx *GenerationContext)

func WithCategoryIconMap

func WithCategoryIconMap(cm []CategoryIconMap) GenerationContextOption
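Judging from the surrounding types, this option presumably indexes the slice by Id into GenerationContext.CategoryIconMap. A self-contained sketch of that conversion, with the struct mirrored locally and the helper name hypothetical:

```go
package main

import "fmt"

// CategoryIconMap mirrors the documented struct.
type CategoryIconMap struct {
	Id   int
	Name string
	Icon string
}

// toCategoryIndex is a hypothetical helper showing the likely
// slice-to-map conversion behind WithCategoryIconMap.
func toCategoryIndex(cm []CategoryIconMap) map[int]*CategoryIconMap {
	out := make(map[int]*CategoryIconMap, len(cm))
	for i := range cm {
		out[cm[i].Id] = &cm[i]
	}
	return out
}

func main() {
	idx := toCategoryIndex([]CategoryIconMap{
		{Id: 42, Name: "Armour", Icon: "armour.svg"},
	})
	fmt.Println(idx[42].Name) // → Armour
}
```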

type LlmClient

type LlmClient interface {
	// If possible, [[LlmClient.Send]] must handle all common AI errors.
	// Only when it is not possible to handle such errors should
	// this method return. Returning a non-empty string with an error
	// will result in a [[Generated]] with a non-empty error.
	//
	// Some of those common errors are:
	//   - Max token limit
	//   - Safety error
	//   - Recitation error
	//
	// It is highly recommended to create a chat to allow resuming
	// in case the max token limit is hit. In this case, the Gen AI client
	// must manually send a "continue" prompt and append the next response
	// batch to the result.
	Send(
		ctx context.Context,
		prompt string,
		mod *nexus.SchemaMod,
		onStreamConsume LlmStreamConsumeHook,
	) (string, error)
}

type LlmStreamConsumeHook

type LlmStreamConsumeHook func(ctx context.Context, streamData any, currentOutput *string) error

type Markxus

type Markxus struct {
	// contains filtered or unexported fields
}

func NewMarkxus

func NewMarkxus(nexusClient *nexus.Client, genAiClient LlmClient, options ...MarkxusOption) *Markxus

func (*Markxus) Generate

func (c *Markxus) Generate(
	ctx context.Context,
	gameCode string,
	modId string,
	options ...GenerationContextOption,
) (*Generated, error)

type MarkxusOption

type MarkxusOption func(*MarkxusOptions)

func WithMarkdownHeaderTemplate

func WithMarkdownHeaderTemplate(format *template.Template) MarkxusOption

The template will be executed with [nexus.SchemaMod] as its data.

func WithPromptFormat

func WithPromptFormat(prompt string) MarkxusOption

The format should contain placeholders that will be filled with these parameters, in sequence:

  • Mod description

Defaults to [DefaultGenAiPromptFormat]

func WithUrlModPageFormat

func WithUrlModPageFormat(format string) MarkxusOption

The format should contain 2 placeholders, in this sequence:

  • Game code
  • Mod ID

Defaults to: [DefaultUrlModPageFormat]

type MarkxusOptions

type MarkxusOptions struct {
	GenAiPromptFormat      string
	UrlModPageFormat       string
	MarkdownHeaderTemplate *template.Template
}

type OnHeaderCreationHook

type OnHeaderCreationHook func(ctx context.Context, header string) error

type OnModFetchedHook

type OnModFetchedHook func(ctx context.Context, mod *nexus.SchemaMod) error

Directories

Path Synopsis
cli
markxus module
genai module
openai module
resty module
