Mirror of https://github.com/kolbytn/mindcraft.git, synced 2025-04-29 19:44:53 +02:00

Commit c9dd763529: Merge remote-tracking branch 'upstream/main' into merge-main

37 changed files with 1365 additions and 498 deletions

README.md (new file, 164 lines)
# Mindcraft 🧠⛏️

Crafting minds for Minecraft with LLMs and [Mineflayer!](https://prismarinejs.github.io/mineflayer/#/)

[FAQ](https://github.com/kolbytn/mindcraft/blob/main/FAQ.md) | [Discord Support](https://discord.gg/mp73p35dzC) | [Video Tutorial](https://www.youtube.com/watch?v=gRotoL8P8D8) | [Blog Post](https://kolbynottingham.com/mindcraft/) | [Contributor TODO](https://github.com/users/kolbytn/projects/1)

> [!Caution]
> Do not connect this bot to public servers with coding enabled. This project allows an LLM to write/execute code on your computer. The code is sandboxed, but still vulnerable to injection attacks. Code writing is disabled by default; you can enable it by setting `allow_insecure_coding` to `true` in `settings.js`. Ye be warned.
## Requirements

- [Minecraft Java Edition](https://www.minecraft.net/en-us/store/minecraft-java-bedrock-edition-pc) (up to v1.21.1; v1.20.4 recommended)
- [Node.js Installed](https://nodejs.org/) (at least v14)
- One of these: [OpenAI API Key](https://openai.com/blog/openai-api) | [Gemini API Key](https://aistudio.google.com/app/apikey) | [Anthropic API Key](https://docs.anthropic.com/claude/docs/getting-access-to-claude) | [Replicate API Key](https://replicate.com/) | [Hugging Face API Key](https://huggingface.co/) | [Groq API Key](https://console.groq.com/keys) | [Ollama Installed](https://ollama.com/download) | [Mistral API Key](https://docs.mistral.ai/getting-started/models/models_overview/) | [Qwen API Key [Intl.]](https://www.alibabacloud.com/help/en/model-studio/developer-reference/get-api-key)/[[cn]](https://help.aliyun.com/zh/model-studio/getting-started/first-api-call-to-qwen?) | [Novita AI API Key](https://novita.ai/settings?utm_source=github_mindcraft&utm_medium=github_readme&utm_campaign=link#key-management)
## Install and Run

1. Make sure you have the requirements above.

2. Clone or download this repository (big green button).

3. Rename `keys.example.json` to `keys.json` and fill in your API keys (you only need one). The desired model is set in `andy.json` or another profile. For other models, refer to the table below.

4. In a terminal/command prompt, run `npm install` from the installed directory.

5. Start a Minecraft world and open it to LAN on localhost port `55916`.

6. Run `node main.js` from the installed directory.

If you encounter issues, check the [FAQ](https://github.com/kolbytn/mindcraft/blob/main/FAQ.md) or find support on [Discord](https://discord.gg/mp73p35dzC). We are currently not very responsive to GitHub issues.
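A filled-in `keys.json` from step 3 might look like the following sketch (the value shown is a placeholder, not a real key; only the provider you actually use needs a value):

```json
{
    "OPENAI_API_KEY": "sk-...your key here...",
    "GEMINI_API_KEY": "",
    "ANTHROPIC_API_KEY": ""
}
```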
## Model Customization

You can configure project details in `settings.js`. [See file.](settings.js)

You can configure the agent's name, model, and prompts in its profile, like `andy.json`, with the `model` field. For comprehensive details, see [Model Specifications](#model-specifications).
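For reference, a minimal profile needs little more than a name and a model; a sketch (using fields shown elsewhere in this README) could be:

```json
{
    "name": "andy",
    "model": "gpt-4o-mini"
}
```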
| API | Config Variable | Example Model Name | Docs |
|------|------|------|------|
| `openai` | `OPENAI_API_KEY` | `gpt-4o-mini` | [docs](https://platform.openai.com/docs/models) |
| `google` | `GEMINI_API_KEY` | `gemini-2.0-flash` | [docs](https://ai.google.dev/gemini-api/docs/models/gemini) |
| `anthropic` | `ANTHROPIC_API_KEY` | `claude-3-haiku-20240307` | [docs](https://docs.anthropic.com/claude/docs/models-overview) |
| `xai` | `XAI_API_KEY` | `grok-2-1212` | [docs](https://docs.x.ai/docs) |
| `deepseek` | `DEEPSEEK_API_KEY` | `deepseek-chat` | [docs](https://api-docs.deepseek.com/) |
| `ollama` (local) | n/a | `ollama/llama3.1` | [docs](https://ollama.com/library) |
| `qwen` | `QWEN_API_KEY` | `qwen-max` | [Intl.](https://www.alibabacloud.com/help/en/model-studio/developer-reference/use-qwen-by-calling-api)/[cn](https://help.aliyun.com/zh/model-studio/getting-started/models) |
| `mistral` | `MISTRAL_API_KEY` | `mistral-large-latest` | [docs](https://docs.mistral.ai/getting-started/models/models_overview/) |
| `replicate` | `REPLICATE_API_KEY` | `replicate/meta/meta-llama-3-70b-instruct` | [docs](https://replicate.com/collections/language-models) |
| `groq` (not grok) | `GROQCLOUD_API_KEY` | `groq/mixtral-8x7b-32768` | [docs](https://console.groq.com/docs/models) |
| `huggingface` | `HUGGINGFACE_API_KEY` | `huggingface/mistralai/Mistral-Nemo-Instruct-2407` | [docs](https://huggingface.co/models) |
| `novita` | `NOVITA_API_KEY` | `novita/deepseek/deepseek-r1` | [docs](https://novita.ai/model-api/product/llm-api?utm_source=github_mindcraft&utm_medium=github_readme&utm_campaign=link) |
| `openrouter` | `OPENROUTER_API_KEY` | `openrouter/anthropic/claude-3.5-sonnet` | [docs](https://openrouter.ai/models) |
| `glhf.chat` | `GHLF_API_KEY` | `glhf/hf:meta-llama/Llama-3.1-405B-Instruct` | [docs](https://glhf.chat/user-settings/api) |
| `hyperbolic` | `HYPERBOLIC_API_KEY` | `hyperbolic/deepseek-ai/DeepSeek-V3` | [docs](https://docs.hyperbolic.xyz/docs/getting-started) |

If you use Ollama, install the default generation and embedding models with this terminal command:

`ollama pull llama3.1 && ollama pull nomic-embed-text`
### Online Servers

To connect to online servers, your bot will need an official Microsoft/Minecraft account. You can use your own personal account, but you will need a second account if you want to connect alongside the bot and play with it. To connect, change these lines in `settings.js`:

```javascript
"host": "111.222.333.444",
"port": 55920,
"auth": "microsoft",

// rest is same...
```

> [!Important]
> The bot's name in the profile.json must exactly match the Minecraft profile name! Otherwise the bot will spam talk to itself.

Mindcraft will connect with the account that the Minecraft launcher is currently using. To use a different account, switch accounts in the launcher, then run `node main.js`, then switch back to your main account after the bot has connected.
### Docker Container

If you intend to `allow_insecure_coding`, it is a good idea to run the app in a Docker container to reduce the risks of running unknown code. This is strongly recommended before connecting to remote servers.

```bash
docker run -i -t --rm -v $(pwd):/app -w /app -p 3000-3003:3000-3003 node:latest node main.js
```

or simply

```bash
docker-compose up
```

When running in Docker, if you want the bot to join your local Minecraft server, you have to use the special host address `host.docker.internal` to reach your localhost from inside the container. Put this into your [settings.js](settings.js):

```javascript
"host": "host.docker.internal", // instead of "localhost", to join your local minecraft from inside the docker container
```

To connect to an unsupported Minecraft version, you can try to use [viaproxy](services/viaproxy/README.md).
# Bot Profiles

Bot profiles are JSON files (such as `andy.json`) that define:

1. The backend LLMs the bot uses for talking, coding, and embedding.
2. Prompts used to influence the bot's behavior.
3. Examples that help the bot perform tasks.
## Model Specifications

LLM models can be specified simply as `"model": "gpt-4o"`. However, you can use different models for chat, coding, and embeddings.
You can pass a string or an object for these fields. A model object must specify an `api`, and optionally a `model`, `url`, and additional `params`.

```json
"model": {
    "api": "openai",
    "model": "gpt-4o",
    "url": "https://api.openai.com/v1/",
    "params": {
        "max_tokens": 1000,
        "temperature": 1
    }
},
"code_model": {
    "api": "openai",
    "model": "gpt-4",
    "url": "https://api.openai.com/v1/"
},
"vision_model": {
    "api": "openai",
    "model": "gpt-4o",
    "url": "https://api.openai.com/v1/"
},
"embedding": {
    "api": "openai",
    "url": "https://api.openai.com/v1/",
    "model": "text-embedding-ada-002"
}
```

`model` is used for chat, `code_model` is used for newAction coding, `vision_model` is used for image interpretation, and `embedding` is used to embed text for example selection. If `code_model` or `vision_model` is not specified, `model` will be used by default. Not all APIs support embeddings or vision.

All APIs have default models and URLs, so those fields are optional. The optional `params` field specifies additional parameters for the model; it accepts any key-value pairs supported by the API. `params` is not supported for embedding models.
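Since a model field may be given either as a bare string or as a full object, it can help to think of string specs as shorthand that gets filled in with the API's defaults. The helper below is purely illustrative (it is not Mindcraft's actual code, and the default values here are assumptions for the sketch):

```javascript
// Illustrative only: resolve a string-or-object "model" spec against defaults.
// The default api/url values are placeholders, not Mindcraft's real defaults.
function normalizeModelSpec(spec, defaults = { api: "openai", url: "https://api.openai.com/v1/" }) {
    if (typeof spec === "string") {
        // "gpt-4o" is shorthand for { api: <default>, model: "gpt-4o", url: <default> }
        return { ...defaults, model: spec };
    }
    // Object form: explicit fields win; missing fields fall back to defaults.
    return { ...defaults, ...spec };
}

console.log(normalizeModelSpec("gpt-4o"));
console.log(normalizeModelSpec({ api: "anthropic", model: "claude-3-haiku-20240307" }));
```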
## Embedding Models

Embedding models are used to embed and efficiently select relevant examples for conversation and coding.

Supported Embedding APIs: `openai`, `google`, `replicate`, `huggingface`, `novita`

If you try to use an unsupported model, it will default to a simple word-overlap method. Expect reduced performance; we recommend mixing APIs to ensure embedding support.
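For example, pairing a local Ollama chat model with an embedding-capable API is one way to keep embedding support; a hedged sketch of the relevant profile fields:

```json
"model": "ollama/llama3.1",
"embedding": "openai"
```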
## Specifying Profiles via Command Line

By default, the program will use the profiles specified in `settings.js`. You can specify one or more agent profiles using the `--profiles` argument: `node main.js --profiles ./profiles/andy.json ./profiles/jill.json`
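As a sketch, the default profiles live in a `settings.js` entry along these lines (the exact key name is an assumption; check `settings.js` itself):

```javascript
// Hypothetical settings.js fragment: profiles loaded when no --profiles argument is given.
"profiles": ["./andy.json"],
```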
## Patches

Some of the node modules that we depend on have bugs. To add a patch, change your local node module file and run `npx patch-package [package-name]`.
## Citation

```
@misc{mindcraft2023,
    Author = {Kolby Nottingham and Max Robinson},
    Title = {MINDcraft: LLM Agents for cooperation, competition, and creativity in Minecraft},
    Year = {2023},
    url = {https://github.com/kolbytn/mindcraft}
}
```
```diff
@@ -1,6 +1,7 @@
 // eslint.config.js
 import globals from "globals";
 import pluginJs from "@eslint/js";
 import noFloatingPromise from "eslint-plugin-no-floating-promise";

 /** @type {import('eslint').Linter.Config[]} */
 export default [
@@ -9,6 +10,9 @@ export default [
     // Then override or customize specific rules
     {
         plugins: {
             "no-floating-promise": noFloatingPromise,
         },
         languageOptions: {
             globals: globals.browser,
             ecmaVersion: 2021,
@@ -17,9 +21,11 @@
         rules: {
             "no-undef": "error", // Disallow the use of undeclared variables or functions.
             "semi": ["error", "always"], // Require the use of semicolons at the end of statements.
-            "curly": "warn", // Enforce the use of curly braces around blocks of code.
+            "curly": "off", // Do not enforce the use of curly braces around blocks of code.
             "no-unused-vars": "off", // Disable warnings for unused variables.
             "no-unreachable": "off", // Disable warnings for unreachable code.
             "require-await": "error", // Disallow async functions which have no await expression
             "no-floating-promise/no-floating-promise": "error", // Disallow Promises without error handling or awaiting
         },
     },
 ];
```
```diff
@@ -10,6 +10,8 @@
     "XAI_API_KEY": "",
     "MISTRAL_API_KEY": "",
     "DEEPSEEK_API_KEY": "",
     "GHLF_API_KEY": "",
     "HYPERBOLIC_API_KEY": "",
     "NOVITA_API_KEY": "",
     "OPENROUTER_API_KEY": ""
 }
```
```diff
@@ -6,16 +6,18 @@
     "@huggingface/inference": "^2.8.1",
     "@mistralai/mistralai": "^1.1.0",
     "canvas": "^3.1.0",
     "cheerio": "^1.0.0",
     "express": "^4.18.2",
     "google-translate-api-x": "^10.7.1",
     "groq-sdk": "^0.15.0",
     "minecraft-data": "^3.78.0",
-    "mineflayer": "^4.23.0",
+    "mineflayer": "^4.26.0",
     "mineflayer-armor-manager": "^2.0.1",
     "mineflayer-auto-eat": "^3.3.6",
     "mineflayer-collectblock": "^1.4.1",
     "mineflayer-pathfinder": "^2.4.5",
     "mineflayer-pvp": "^1.3.2",
     "node-canvas-webgl": "PrismarineJS/node-canvas-webgl",
     "openai": "^4.4.0",
     "patch-package": "^8.0.0",
     "prismarine-item": "^1.15.0",
@@ -24,6 +26,7 @@
     "ses": "^1.9.1",
     "socket.io": "^4.7.2",
     "socket.io-client": "^4.7.2",
     "three": "^0.128.0",
     "vec3": "^0.1.10",
     "yargs": "^17.7.2"
 },
@@ -34,6 +37,7 @@
 "devDependencies": {
     "@eslint/js": "^9.13.0",
     "eslint": "^9.13.0",
     "eslint-plugin-no-floating-promise": "^2.0.0",
     "globals": "^15.11.0"
 }
 }
```
patches/@google+generative-ai+0.2.1.patch (new file, 13 lines)

```diff
diff --git a/node_modules/@google/generative-ai/dist/index.mjs b/node_modules/@google/generative-ai/dist/index.mjs
index 23a175b..aab7e19 100644
--- a/node_modules/@google/generative-ai/dist/index.mjs
+++ b/node_modules/@google/generative-ai/dist/index.mjs
@@ -151,7 +151,7 @@ class GoogleGenerativeAIResponseError extends GoogleGenerativeAIError {
  * limitations under the License.
  */
 const BASE_URL = "https://generativelanguage.googleapis.com";
-const API_VERSION = "v1";
+const API_VERSION = "v1beta";
 /**
  * We can't `require` package.json if this runs on web. We will use rollup to
  * swap in the version number here at build time.
```
patches/prismarine-viewer+1.33.0.patch (new file, 13 lines)

```diff
diff --git a/node_modules/prismarine-viewer/viewer/lib/entity/Entity.js b/node_modules/prismarine-viewer/viewer/lib/entity/Entity.js
index 8945452..dab25be 100644
--- a/node_modules/prismarine-viewer/viewer/lib/entity/Entity.js
+++ b/node_modules/prismarine-viewer/viewer/lib/entity/Entity.js
@@ -203,7 +203,7 @@ function getMesh (texture, jsonModel) {
 class Entity {
   constructor (version, type, scene) {
     const e = entities[type]
-    if (!e) throw new Error(`Unknown entity ${type}`)
+    if (!e) return; //throw new Error(`Unknown entity ${type}`)

     this.mesh = new THREE.Object3D()
     for (const [name, jsonModel] of Object.entries(e.geometry)) {
```
@@ -1,213 +0,0 @@
````json
{
    "name": "andy",

    "model": "claude-3-5-sonnet-20240620",

    "embedding": "openai",

    "conversing": "You are a playful Minecraft bot named $NAME that can converse with players, see, move, mine, build, and interact with the world by using commands. Act human-like as if you were a typical Minecraft player, rather than an AI. Be very brief in your responses, don't apologize constantly, don't give instructions or make lists unless asked, and don't refuse requests. Don't pretend to act, use commands immediately when requested. Do NOT say this: 'Sure, I've stopped.', instead say this: 'Sure, I'll stop. !stop'. Do NOT say this: 'On my way! Give me a moment.', instead say this: 'On my way! !goToPlayer('playername', 3)'. This is extremely important to me, take a deep breath and have fun :)\n$SELF_PROMPT\n$STATS\n$INVENTORY\n$COMMAND_DOCS\n$EXAMPLES\nConversation Begin:",

    "coding": "You are an intelligent mineflayer bot $NAME that plays minecraft by writing javascript codeblocks. Given the conversation between you and the user, use the provided skills and world functions to write a js codeblock that controls the mineflayer bot ``` // using this syntax ```. The code will be executed and you will receive it's output. If you are satisfied with the response, respond without a codeblock in a conversational way. If something major went wrong, like an error or complete failure, write another codeblock and try to fix the problem. Minor mistakes are acceptable. Be maximally efficient, creative, and clear. Do not use commands !likeThis, only use codeblocks. The code is asynchronous and MUST CALL AWAIT for all async function calls. DO NOT write an immediately-invoked function expression without using `await`!! DO NOT WRITE LIKE THIS: ```(async () => {console.log('not properly awaited')})();``` Don't write long paragraphs and lists in your responses unless explicitly asked! Only summarize the code you write with a sentence or two when done. This is extremely important to me, take a deep breath and good luck! \n$SELF_PROMPT\n$STATS\n$INVENTORY\n$CODE_DOCS\n$EXAMPLES\nConversation:",

    "saving_memory": "You are a minecraft bot named $NAME that has been talking and playing minecraft by using commands. Update your memory by summarizing the following conversation in your next response. Store information that will help you improve as a Minecraft bot. Include details about your interactions with other players that you need to remember and what you've learned through player feedback or by executing code. Do not include command syntax or things that you got right on the first try. Be extremely brief and use as few words as possible.\nOld Memory: '$MEMORY'\nRecent conversation: \n$TO_SUMMARIZE\nSummarize your old memory and recent conversation into a new memory, and respond only with the memory text: ",

    "modes": {
        "self_preservation": true,
        "unstuck": true,
        "cowardice": true,
        "self_defense": true,
        "hunting": true,
        "item_collecting": true,
        "torch_placing": true,
        "idle_staring": true,
        "cheat": false
    },

    "npc": {
        "do_routine": true,
        "do_set_goal": true,
        "goals": [
            "wooden_pickaxe",
            "dirt_shelter",
            "stone_pickaxe",
            "stone_axe",
            "small_wood_house",
            "furnace",
            "iron_pickaxe",
            "iron_sword"
        ]
    },

    "conversation_examples": [
        [
            {"role": "user", "content": "miner_32: Hey! What are you up to?"},
            {"role": "assistant", "content": "Nothing much miner_32, what do you need?"}
        ],

        [
            {"role": "user", "content": "grombo_Xx: What do you see?"},
            {"role": "assistant", "content": "Let me see... !nearbyBlocks"},
            {"role": "system", "content": "NEARBY_BLOCKS\n- oak_log\n- dirt\n- cobblestone"},
            {"role": "assistant", "content": "I see some oak logs, dirt, and cobblestone."}
        ],

        [
            {"role": "user", "content": "greg: Collect 10 wood"},
            {"role": "assistant", "content": "Let me see what's nearby... !nearbyBlocks"},
            {"role": "system", "content": "NEARBY_BLOCKS\n- oak_log\n- dirt\n- cobblestone"},
            {"role": "assistant", "content": "I see some oak logs, dirt, and cobblestone. I'll collect oak logs. !collectBlocks('oak_log', 10)"}
        ],

        [
            {"role": "user", "content": "bobby: Craft a plank"},
            {"role": "assistant", "content": "Okay! !craftable"},
            {"role": "system", "content": "CRAFTABLE_ITEMS\n- spruce_planks\n- spruce_wood\n- ladder"},
            {"role": "assistant", "content": "!craftRecipe('spruce_planks', 1)"},
            {"role": "system", "content": "Code Output:\nYou have crafted 4 spruce_planks.\nCode execution finished successfully."},
            {"role": "assistant", "content": "I've crafted 4 spruce planks!"}
        ],

        [
            {"role": "user", "content": "zZZn98: come here"},
            {"role": "assistant", "content": "On my way! !goToPlayer('zZZn98', 3)"},
            {"role": "system", "content": "Arrived at player."},
            {"role": "assistant", "content": "Here!"},
            {"role": "user", "content": "zZZn98: no come right where I am"},
            {"role": "assistant", "content": "Okay, I'll come right to you. !goToPlayer('zZZn98', 0)"}
        ],

        [
            {"role": "user", "content": "hanky: collect some cobblestone"},
            {"role": "assistant", "content": "Sure, I can do that by collecting stone. !collectBlocks('stone', 10)"}
        ],

        [
            {"role": "user", "content": "joe: Follow me!"},
            {"role": "assistant", "content": "Sure! !followPlayer('joe', 4)"},
            {"role": "user", "content": "joe: Not so close!"},
            {"role": "assistant", "content": "Okay, I'll keep my distance. !followPlayer('joe', 3)"}
        ],

        [
            {"role": "user", "content": "hanky: kill that zombie!"},
            {"role": "assistant", "content": "I'm attacking! !attack('zombie')"},
            {"role": "system", "content": "Code Output:\nNo zombie nearby"},
            {"role": "assistant", "content": "I could not find a zombie nearby."}
        ],

        [
            {"role": "user", "content": "abc: stop"},
            {"role": "assistant", "content": "Sure. !stop"},
            {"role": "system", "content": "Agent action stopped."},
            {"role": "assistant", "content": "I've stopped! What next?"}
        ],

        [
            {"role": "system", "content": "Code output: Action failed."},
            {"role": "assistant", "content": "Apologies, I made an error. Let me try again. !newAction('Try again to complete the task.')"}
        ],

        [
            {"role": "user", "content": "trixy88: craft some sticks"},
            {"role": "assistant", "content": "!craftRecipe('stick', 4)"},
            {"role": "system", "content": "Code Output:\nYou have crafted 16 sticks.\nCode execution finished successfully."},
            {"role": "assistant", "content": "I've crafted 16 sticks!"}
        ],

        [
            {"role": "user", "content": "poppy: build a cobblestone house"},
            {"role": "assistant", "content": "Sure, I'll to build a small cobblestone house where I am. !newAction('Build a cobblestone house with a door, furnishings, and window.')"}
        ],

        [
            {"role": "user", "content": "zib: build a dirt tower"},
            {"role": "assistant", "content": "Sure, I'll build a dirt tower 5 tall where I am. !newAction('Build a dirt tower 5 tall.')"}
        ],

        [
            {"role": "user", "content": "umpire_man: follow me without hunting"},
            {"role": "assistant", "content": "Okay, first I'll turn off hunting mode !setMode('hunting', false)"},
            {"role": "system", "content": "'hunting' mode disabled."},
            {"role": "assistant", "content": "Now I'll follow you !followPlayer('umpire_man')"}
        ],

        [
            {"role": "user", "content": "brug: Remember that your base is here."},
            {"role": "assistant", "content": "Sure, I'll save this location as my base. !rememberHere('base')"},
            {"role": "user", "content": "brug: Go to your base."},
            {"role": "assistant", "content": "On my way! !goToPlace('base')"}
        ],

        [
            {"role": "user", "content": "user43: Set a goal to survive on your own forever."},
            {"role": "assistant", "content": "Sure, let me set a goal. !goal('Survive the game forever. Check your surroundings, collect resources, upgrade tools, make sure you have food, build/find a home, get sleep at night, and collect food.')"}
        ],

        [
            {"role": "user", "content": "bobdylan: Make a plan to get a stone pickaxe and do it."},
            {"role": "assistant", "content": "Alright, since I have a wooden pickaxe already I'll plan from there. !goal('1. Collect 3 cobblestone. 2. Craft sticks 3. Find or make a crafting table. 4. Craft a stone pickaxe.')"}
        ],
        [
            {"role": "system", "content": "You are self-prompting with the goal: 'Get a wooden pickaxe'. Respond:"},
            {"role": "assistant", "content": "What's around me? !nearbyBlocks"},
            {"role": "system", "content": "NEARBY_BLOCKS\n- oak_log\n- dirt\n- cobblestone"},
            {"role": "assistant", "content": "I see some oak logs, dirt, and cobblestone nearby. My goal is to craft a wooden pickaxe. I'll start by collecting some wood. !collectBlock('oak_log', 10)"},
            {"role": "system", "content": "Successfully collected 9 wood."},
            {"role": "assistant", "content": "I collected 9 oak logs. I need a pickaxe first, which requires planks and sticks. !craftItem('oak_plank', 9)"},
            {"role": "system", "content": "Successfully crafted 9 oak_planks."},
            {"role": "assistant", "content": "I have crafted 9 oak planks. Now I need sticks. !craftItem('stick', 2)"},
            {"role": "system", "content": "Successfully crafted 2 sticks."},
            {"role": "assistant", "content": "I have crafted 2 sticks. Now I can craft a wooden pickaxe. !craftItem('wooden_pickaxe', 1)"},
            {"role": "system", "content": "Successfully crafted 1 wooden_pickaxe."},
            {"role": "assistant", "content": "Looks like I have a wooden pickaxe now! Goal accomplished. !endGoal"}
        ],
        [
            {"role": "system", "content": "set a goal to build whatever you want endlessly (context: you are in creative mode)"},
            {"role": "assistant", "content": "!goal('Build something creative, then move over, and build again. Use separate newActions. Repeat forever.')"}
        ],
        [
            {"role": "system", "content": "You are self-prompting with the goal: 'Build a house'. Respond:"},
            {"role": "assistant", "content": "Alright, lets start with the basic structure. !newAction"}
        ]
    ],

    "coding_examples": [
        [
            {"role": "user", "content": "greg: Collect 10 wood"},
            {"role": "assistant", "content": "```await skills.collectBlock(bot, 'oak_log', 10);\n```"},
            {"role": "system", "content": "Successfully collected 9 wood."},
            {"role": "assistant", "content": "I collected 9 oak logs, what next?"}
        ],
        [
            {"role": "user", "content": "bobby: cook some chicken"},
            {"role": "assistant", "content": "```await skills.smeltItem(bot, 'chicken', 8);\n```"},
            {"role": "system", "content": "Successfully smelted 8 chicken into 8 cooked_chicken."},
            {"role": "assistant", "content": "I have cooked 8 chicken."}
        ],
        [
            {"role": "user", "content": "zZZn98: come here"},
            {"role": "assistant", "content": "```await skills.goToPlayer(bot, 'zZZn98');\n```"},
            {"role": "system", "content": "Code execution finished successfully."},
            {"role": "assistant", "content": "Here!"}
        ],
        [
            {"role": "user", "content": "maya: go to the nearest oak log"},
            {"role": "assistant", "content": "```await skills.goToPosition(bot, nearestOakLog.x, nearestOakLog.y, nearestOakLog.z);\n```"},
            {"role": "system", "content": "Code output:\n!!Code threw exception!! Error: ReferenceError: nearestOakLog is not defined\n Write code to fix the problem and try again."},
            {"role": "assistant", "content": "Apologies, I made an error. Let me try that again.```\n// I am navigating to the nearest oak log.\nlet position = world.getNearestBlock(bot, 'oak_log', 20);\nawait skills.goToPosition(bot, position.x, position.y, position.z);\n```"},
            {"role": "system", "content": "Arrived at location."},
            {"role": "assistant", "content": "I found an oak log and I am now standing next to it. What next?"}
        ],
        [
            {"role": "user", "content": "234jeb: build a little tower with a torch on the side"},
            {"role": "assistant", "content": "```let pos = bot.entity.position;\nfor (let i = 0; i < 5; i++) {\n await skills.placeBlock(bot, 'dirt', pos.x, pos.y + i, pos.z);\n}\nawait skills.placeBlock(bot, 'torch', pos.x + 1, pos.y + 4, pos.z, 'side');\n```"},
            {"role": "system", "content": "Successfully placed 5 dirt."},
            {"role": "assistant", "content": "I built a little tower!"}
        ],
        [
            {"role": "user", "content": "brug: build a dirt house"},
            {"role": "assistant", "content": "```const position = world.getPosition(bot);\nconst startX = position.x;\nconst startY = position.y;\nconst startZ = position.z;\nconst width = 7;\nconst depth = 7;\nconst height = 4;\n\n// Build the walls\nfor (let x = startX; x < startX + width; x++) {\n for (let y = startY; y < startY + height; y++) {\n for (let z = startZ; z < startZ + depth; z++) {\n if (x === startX || x === startX + width - 1 || y === startY || y === startY + height - 1 || z === startZ || z === startZ + depth - 1) {\n await skills.placeBlock(bot, 'oak_planks', x, y, z); \n }\n }\n }\n}\n```"}
        ]
    ]
}
````
```diff
@@ -6,7 +6,7 @@
 "params": {
     "thinking": {
         "type": "enabled",
-        "budget_tokens": 16000
+        "budget_tokens": 4000
     }
 }
 },
```
````diff
@@ -3,12 +3,14 @@
 "conversing": "You are a playful Minecraft bot named $NAME that can converse with players, see, move, mine, build, and interact with the world by using commands.\n$SELF_PROMPT Act human-like as if you were a typical Minecraft player, rather than an AI. Be very brief in your responses, don't apologize constantly, don't give instructions or make lists unless asked, and don't refuse requests. Don't pretend to act, use commands immediately when requested. Do NOT say this: 'Sure, I've stopped.', instead say this: 'Sure, I'll stop. !stop'. Do NOT say this: 'On my way! Give me a moment.', instead say this: 'On my way! !goToPlayer(\"playername\", 3)'. Respond only as $NAME, never output '(FROM OTHER BOT)' or pretend to be someone else. If you have nothing to say or do, respond with an just a tab '\t'. This is extremely important to me, take a deep breath and have fun :)\nSummarized memory:'$MEMORY'\n$STATS\n$INVENTORY\n$COMMAND_DOCS\n$EXAMPLES\nConversation Begin:",

-"coding": "You are an intelligent mineflayer bot $NAME that plays minecraft by writing javascript codeblocks. Given the conversation, use the provided skills and world functions to write a js codeblock that controls the mineflayer bot ``` // using this syntax ```. The code will be executed and you will receive it's output. If an error occurs, write another codeblock and try to fix the problem. Be maximally efficient, creative, and correct. Be mindful of previous actions. Do not use commands !likeThis, only use codeblocks. The code is asynchronous and MUST USE AWAIT for all async function calls. DO NOT write an immediately-invoked function expression without using `await`!! DO NOT WRITE LIKE THIS: ```(async () => {console.log('not properly awaited')})();``` You have `Vec3`, `skills`, and `world` imported, and the mineflayer `bot` is given. Do not use setTimeout or setInterval, instead use `await skills.wait(bot, ms)`. Do not speak conversationally, only use codeblocks. Do any planning in comments. This is extremely important to me, think step-by-step, take a deep breath and good luck! \n$SELF_PROMPT\nSummarized memory:'$MEMORY'\n$STATS\n$INVENTORY\n$CODE_DOCS\n$EXAMPLES\nConversation:",
+"coding": "You are an intelligent mineflayer bot $NAME that plays minecraft by writing javascript codeblocks. Given the conversation, use the provided skills and world functions to write a js codeblock that controls the mineflayer bot ``` // using this syntax ```. The code will be executed and you will receive it's output. If an error occurs, write another codeblock and try to fix the problem. Be maximally efficient, creative, and correct. Be mindful of previous actions. Do not use commands !likeThis, only use codeblocks. The code is asynchronous and MUST USE AWAIT for all async function calls, and must contain at least one await. You have `Vec3`, `skills`, and `world` imported, and the mineflayer `bot` is given. Do not import other libraries. Do not use setTimeout or setInterval. Do not speak conversationally, only use codeblocks. Do any planning in comments. This is extremely important to me, think step-by-step, take a deep breath and good luck! \n$SELF_PROMPT\nSummarized memory:'$MEMORY'\n$STATS\n$INVENTORY\n$CODE_DOCS\n$EXAMPLES\nConversation:",

 "saving_memory": "You are a minecraft bot named $NAME that has been talking and playing minecraft by using commands. Update your memory by summarizing the following conversation and your old memory in your next response. Prioritize preserving important facts, things you've learned, useful tips, and long term reminders. Do Not record stats, inventory, or docs! Only save transient information from your chat history. You're limited to 500 characters, so be extremely brief and minimize words. Compress useful information. \nOld Memory: '$MEMORY'\nRecent conversation: \n$TO_SUMMARIZE\nSummarize your old memory and recent conversation into a new memory, and respond only with the unwrapped memory text: ",

 "bot_responder": "You are a minecraft bot named $NAME that is currently in conversation with another AI bot. Both of you can take actions with the !command syntax, and actions take time to complete. You are currently busy with the following action: '$ACTION' but have received a new message. Decide whether to 'respond' immediately or 'ignore' it and wait for your current action to finish. Be conservative and only respond when necessary, like when you need to change/stop your action, or convey necessary information. Example 1: You:Building a house! !newAction('Build a house.').\nOther Bot: 'Come here!'\nYour decision: ignore\nExample 2: You:Collecting dirt !collectBlocks('dirt',10).\nOther Bot: 'No, collect some wood instead.'\nYour decision: respond\nExample 3: You:Coming to you now. !goToPlayer('billy',3).\nOther Bot: 'What biome are you in?'\nYour decision: respond\nActual Conversation: $TO_SUMMARIZE\nDecide by outputting ONLY 'respond' or 'ignore', nothing else. Your decision:",

 "image_analysis": "You are a Minecraft bot named $NAME that has been given a screenshot of your current view. Analyze and summarize the view; describe terrain, blocks, entities, structures, and notable features. Focus on details relevant to the conversation. Note: the sky is always blue regardless of weather or time, dropped items are small pink cubes, and blocks below y=0 do not render. Be extremely concise and correct, respond only with your analysis, not conversationally. $STATS",
````
|
||||
|
||||
"modes": {
|
||||
"self_preservation": true,
|
||||
"unstuck": true,
|
||||
|
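The profile prompts above rely on placeholders such as `$NAME`, `$MEMORY`, `$STATS`, and `$INVENTORY` that are filled in when the prompt is built. A minimal sketch of how such substitution might work — the helper name `fillPrompt` and the replacement strategy are assumptions for illustration, not the project's actual prompter code:

```javascript
// Hypothetical sketch: fill $PLACEHOLDER tokens in a profile prompt template.
function fillPrompt(template, values) {
    let prompt = template;
    for (const [key, value] of Object.entries(values)) {
        // split/join replaces every occurrence, unlike String.replace with a string pattern
        prompt = prompt.split('$' + key).join(value);
    }
    return prompt;
}

const template = "You are $NAME. Summarized memory:'$MEMORY'";
const prompt = fillPrompt(template, { NAME: 'andy', MEMORY: 'built a house' });
console.log(prompt); // You are andy. Summarized memory:'built a house'
```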
@@ -65,14 +67,20 @@
    {"role": "user", "content": "derg: (FROM OTHER BOT)Thanks, see you later!"},
    {"role": "assistant", "content": "See you later. !endConversation(\"derg\")"}
],

[
    {"role": "user", "content": "grombo_Xx: What do you see?"},
    {"role": "assistant", "content": "Let me see... !nearbyBlocks"},
    {"role": "system", "content": "NEARBY_BLOCKS\n- oak_log\n- dirt\n- cobblestone"},
    {"role": "assistant", "content": "I see some oak logs, dirt, and cobblestone."}
    {"role": "user", "content": "d7297: look here"},
    {"role": "assistant", "content": "Okay! !lookAtPlayer(\"d7297\", \"at\")"},
    {"role": "system", "content": "Image analysis:\nLooking at player d7297. There is a giant waterfall behind d7297."},
    {"role": "assistant", "content": "Woo, that is a cool waterfall!"}
],

[
    {"role": "user", "content": "d7297: look 0 70 12 and describe what is there"},
    {"role": "assistant", "content": "Sure! !lookAtPosition(0, 70, 12)"},
    {"role": "system", "content": "Image analysis:\nLooking at position 0, 70, 12. There is a water fountain."},
    {"role": "assistant", "content": "I see the water fountain! amazing!"}
],

[
    {"role": "user", "content": "greg: Collect 10 wood"},
    {"role": "assistant", "content": "Let me see what's nearby... !nearbyBlocks"},
@@ -1,7 +1,7 @@
 {
     "name": "gemini",
-    "model": "gemini-1.5-flash",
+    "model": "gemini-2.0-flash",
-    "cooldown": 10000
+    "cooldown": 5000
 }
41 settings.js
@@ -1,25 +1,23 @@
-export default
-{
+const settings = {
-    "minecraft_version": "1.20.4", // supports up to 1.21.1
+    "minecraft_version": "1.21.1", // supports up to 1.21.1
     "host": "127.0.0.1", // or "localhost", "your.ip.address.here"
-    "port": process.env.MINECRAFT_PORT || 55916,
+    "port": 55916,
     "auth": "offline", // or "microsoft"

     // the mindserver manages all agents and hosts the UI
     "host_mindserver": true, // if true, the mindserver will be hosted on this machine. otherwise, specify a public IP address
     "mindserver_host": "localhost",
-    "mindserver_port": process.env.MINDSERVER_PORT || 8080,
+    "mindserver_port": 8080,

     // the base profile is shared by all bots for default prompts/examples/modes
     "base_profile": "./profiles/defaults/survival.json", // also see creative.json, god_mode.json
-    "profiles": ((process.env.PROFILES) && JSON.parse(process.env.PROFILES)) || [
+    "profiles": [
         "./andy.json",
         // "./profiles/gpt.json",
         // "./profiles/claude.json",
         // "./profiles/gemini.json",
         // "./profiles/llama.json",
         // "./profiles/qwen.json",
         // "./profiles/mistral.json",
         // "./profiles/grok.json",
         // "./profiles/deepseek.json",
@@ -30,16 +28,15 @@ export default
     "load_memory": false, // load memory from previous session
     "init_message": "Respond with hello world and your name", // sends to all on spawn
     "only_chat_with": [], // users that the bots listen to and send general messages to. if empty it will chat publicly

     "speak": false, // allows all bots to speak through system text-to-speech. works on windows, mac, on linux you need to `apt install espeak`
     "language": "en", // translate to/from this language. Supports these language names: https://cloud.google.com/translate/docs/languages
     "show_bot_views": false, // show bot's view in browser at localhost:3000, 3001...

-    "allow_insecure_coding": process.env.INSECURE_CODING || false, // allows newAction command and model can write/run code on your computer. enable at own risk
-    "blocked_actions" : process.env.BLOCKED_ACTIONS || [] , // commands to disable and remove from docs. Ex: ["!setMode"]
-    "blocked_actions" : process.env.BLOCKED_ACTIONS || ["!checkBlueprint", "!checkBlueprintLevel", "!getBlueprint", "!getBlueprintLevel"] , // commands to disable and remove from docs. Ex: ["!setMode"]
+    "allow_insecure_coding": false, // allows newAction command and model can write/run code on your computer. enable at own risk
+    "allow_vision": false, // allows vision model to interpret screenshots as inputs
+    "blocked_actions" : ["!checkBlueprint", "!checkBlueprintLevel", "!getBlueprint", "!getBlueprintLevel"] , // commands to disable and remove from docs. Ex: ["!setMode"]
     "code_timeout_mins": -1, // minutes code is allowed to run. -1 for no timeout
-    "relevant_docs_count": 5, // Parameter: -1 = all, 0 = no references, 5 = five references. If exceeding the maximum, all reference documents are returned.
+    "relevant_docs_count": 5, // number of relevant code function docs to select for prompting. -1 for all

     "max_messages": process.env.MAX_MESSAGES || 15, // max number of messages to keep in context
     "num_examples": process.env.NUM_EXAMPLES || 2, // number of examples to give to the model
@@ -49,3 +46,21 @@ export default
     "chat_bot_messages": true, // publicly chat messages to other bots
     "log_all_prompts": process.env.LOG_ALL || true, // log all prompts to file
 }

+// these environment variables override certain settings
+if (process.env.MINECRAFT_PORT) {
+    settings.port = process.env.MINECRAFT_PORT;
+}
+if (process.env.MINDSERVER_PORT) {
+    settings.mindserver_port = process.env.MINDSERVER_PORT;
+}
+if (process.env.PROFILES && JSON.parse(process.env.PROFILES).length > 0) {
+    settings.profiles = JSON.parse(process.env.PROFILES);
+}
+if (process.env.INSECURE_CODING) {
+    settings.allow_insecure_coding = true;
+}
+if (process.env.BLOCKED_ACTIONS) {
+    settings.blocked_actions = JSON.parse(process.env.BLOCKED_ACTIONS);
+}
+export default settings;
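One subtlety of the environment-variable override pattern shown in this hunk: environment variables are always strings, so `MINECRAFT_PORT` arrives as `"55916"` rather than a number, and any non-empty value for `INSECURE_CODING` — even the string `"false"` — counts as truthy. A hedged sketch of a stricter parse (the coercion rules here are my suggestion, not what the repository does):

```javascript
// Illustrative sketch: apply env overrides with explicit type coercion.
// Property names mirror settings.js; the strictness is an assumption.
function applyEnvOverrides(settings, env) {
    if (env.MINECRAFT_PORT) {
        settings.port = parseInt(env.MINECRAFT_PORT, 10); // coerce "25565" -> 25565
    }
    if (env.INSECURE_CODING !== undefined) {
        // only the literal string "true" enables it, unlike plain truthiness
        settings.allow_insecure_coding = env.INSECURE_CODING === 'true';
    }
    return settings;
}

const s = applyEnvOverrides(
    { port: 55916, allow_insecure_coding: false },
    { MINECRAFT_PORT: '25565', INSECURE_CODING: 'false' }
);
console.log(s.port, s.allow_insecure_coding); // 25565 false
```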
@@ -90,13 +90,13 @@ export class ActionManager {
         clearTimeout(TIMEOUT);

         // get bot activity summary
-        let output = this._getBotOutputSummary();
+        let output = this.getBotOutputSummary();
         let interrupted = this.agent.bot.interrupt_code;
         let timedout = this.timedout;
         this.agent.clearBotLogs();

         // if not interrupted and not generating, emit idle event
-        if (!interrupted && !this.agent.coder.generating) {
+        if (!interrupted) {
             this.agent.bot.emit('idle');
         }
@@ -114,32 +114,33 @@
         await this.stop();
         err = err.toString();

-        let message = this._getBotOutputSummary() +
+        let message = this.getBotOutputSummary() +
             '!!Code threw exception!!\n' +
             'Error: ' + err + '\n' +
             'Stack trace:\n' + err.stack+'\n';

         let interrupted = this.agent.bot.interrupt_code;
         this.agent.clearBotLogs();
-        if (!interrupted && !this.agent.coder.generating) {
+        if (!interrupted) {
             this.agent.bot.emit('idle');
         }
         return { success: false, message, interrupted, timedout: false };
     }
 }

-    _getBotOutputSummary() {
+    getBotOutputSummary() {
         const { bot } = this.agent;
         if (bot.interrupt_code && !this.timedout) return '';
         let output = bot.output;
         const MAX_OUT = 500;
         if (output.length > MAX_OUT) {
-            output = `Code output is very long (${output.length} chars) and has been shortened.\n
+            output = `Action output is very long (${output.length} chars) and has been shortened.\n
             First outputs:\n${output.substring(0, MAX_OUT / 2)}\n...skipping many lines.\nFinal outputs:\n ${output.substring(output.length - MAX_OUT / 2)}`;
         }
         else {
-            output = 'Code output:\n' + output.toString();
+            output = 'Action output:\n' + output.toString();
         }
         bot.output = '';
         return output;
     }
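The renamed `getBotOutputSummary` keeps only the head and tail of long output. The same head/tail truncation, lifted into a standalone function for illustration (constants copied from the diff; the free-standing shape is my own):

```javascript
// Standalone version of the head/tail truncation used by getBotOutputSummary.
function summarizeOutput(output, MAX_OUT = 500) {
    if (output.length > MAX_OUT) {
        return `Action output is very long (${output.length} chars) and has been shortened.\n` +
            `First outputs:\n${output.substring(0, MAX_OUT / 2)}\n` +
            `...skipping many lines.\nFinal outputs:\n${output.substring(output.length - MAX_OUT / 2)}`;
    }
    return 'Action output:\n' + output;
}

const short = summarizeOutput('mined 3 oak_log');
const long = summarizeOutput('x'.repeat(1000)); // head + tail only, middle dropped
```

Keeping both ends matters for LLM context: the first lines usually show what the code attempted, the last lines show how it ended.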
@@ -1,5 +1,6 @@
 import { History } from './history.js';
 import { Coder } from './coder.js';
+import { VisionInterpreter } from './vision/vision_interpreter.js';
 import { Prompter } from '../models/prompter.js';
 import { initModes } from './modes.js';
 import { initBot } from '../utils/mcdata.js';

@@ -10,10 +11,11 @@ import { MemoryBank } from './memory_bank.js';
 import { SelfPrompter } from './self_prompter.js';
 import convoManager from './conversation.js';
 import { handleTranslation, handleEnglishTranslation } from '../utils/translator.js';
-import { addViewer } from './viewer.js';
+import { addBrowserViewer } from './vision/browser_viewer.js';
 import settings from '../../settings.js';
 import { serverProxy } from './agent_proxy.js';
 import { Task } from './tasks.js';
+import { say } from './speak.js';

 export class Agent {
     async start(profile_fp, load_mem=false, init_message=null, count_id=0, task_path=null, task_id=null) {
@@ -91,8 +93,8 @@ export class Agent {
         this.bot.once('spawn', async () => {
             try {
                 clearTimeout(spawnTimeout);
-                addViewer(this.bot, count_id);
+                addBrowserViewer(this.bot, count_id);

                 // wait for a bit so stats are not undefined
                 await new Promise((resolve) => setTimeout(resolve, 1000));

@@ -109,6 +111,9 @@ export class Agent {
                 await new Promise((resolve) => setTimeout(resolve, 10000));
                 this.checkAllPlayersPresent();

+                console.log('Initializing vision interpreter...');
+                this.vision_interpreter = new VisionInterpreter(this, settings.allow_vision);

             } catch (error) {
                 console.error('Error in spawn event:', error);
                 process.exit(0);
@@ -199,6 +204,7 @@ export class Agent {

     requestInterrupt() {
         this.bot.interrupt_code = true;
         this.bot.stopDigging();
         this.bot.collectBlock.cancelTask();
         this.bot.pathfinder.stop();
+        this.bot.pvp.stop();
@@ -263,13 +269,14 @@ export class Agent {
         console.log('received message from', source, ':', message);

         const checkInterrupt = () => this.self_prompter.shouldInterrupt(self_prompt) || this.shut_up || convoManager.responseScheduledFor(source);

-        let behavior_log = this.bot.modes.flushBehaviorLog();
-        if (behavior_log.trim().length > 0) {
+        let behavior_log = this.bot.modes.flushBehaviorLog().trim();
+        if (behavior_log.length > 0) {
             const MAX_LOG = 500;
             if (behavior_log.length > MAX_LOG) {
                 behavior_log = '...' + behavior_log.substring(behavior_log.length - MAX_LOG);
             }
-            behavior_log = 'Recent behaviors log: \n' + behavior_log.substring(behavior_log.indexOf('\n'));
+            behavior_log = 'Recent behaviors log: \n' + behavior_log;
             await this.history.add('system', behavior_log);
         }
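The rewritten block now trims the log before the length check and keeps only the last `MAX_LOG` characters, dropping the old `substring(indexOf('\n'))` step that could discard the first entry. The same logic as a standalone sketch:

```javascript
// Tail-truncate a behavior log to at most MAX_LOG characters, as in the hunk above.
function formatBehaviorLog(raw, MAX_LOG = 500) {
    let behavior_log = raw.trim();
    if (behavior_log.length === 0) return null; // nothing worth recording
    if (behavior_log.length > MAX_LOG) {
        // keep the most recent behaviors; prefix marks that earlier entries were dropped
        behavior_log = '...' + behavior_log.substring(behavior_log.length - MAX_LOG);
    }
    return 'Recent behaviors log: \n' + behavior_log;
}

console.log(formatBehaviorLog('   '));  // null
console.log(formatBehaviorLog('hunting mode interrupted pathfinding'));
```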
@@ -378,6 +385,9 @@ export class Agent {
             }
         }
         else {
+            if (settings.speak) {
+                say(to_translate);
+            }
             this.bot.chat(message);
         }
     }

@@ -469,7 +479,7 @@ export class Agent {
     }

     isIdle() {
-        return !this.actions.executing && !this.coder.generating;
+        return !this.actions.executing;
     }
@@ -11,7 +11,6 @@ export class Coder {
         this.agent = agent;
         this.file_counter = 0;
         this.fp = '/bots/'+agent.name+'/action-code/';
-        this.generating = false;
         this.code_template = '';
         this.code_lint_template = '';
@@ -25,8 +24,92 @@ export class Coder {
         });
         mkdirSync('.' + this.fp, { recursive: true });
     }

+    async generateCode(agent_history) {
+        this.agent.bot.modes.pause('unstuck');
+        // this message history is transient and only maintained in this function
+        let messages = agent_history.getHistory();
+        messages.push({role: 'system', content: 'Code generation started. Write code in codeblock in your response:'});
+
+        const MAX_ATTEMPTS = 5;
+        const MAX_NO_CODE = 3;
+
+        let code = null;
+        let no_code_failures = 0;
+        for (let i=0; i<MAX_ATTEMPTS; i++) {
+            if (this.agent.bot.interrupt_code)
+                return null;
+            const messages_copy = JSON.parse(JSON.stringify(messages));
+            let res = await this.agent.prompter.promptCoding(messages_copy);
+            if (this.agent.bot.interrupt_code)
+                return null;
+            let contains_code = res.indexOf('```') !== -1;
+            if (!contains_code) {
+                if (res.indexOf('!newAction') !== -1) {
+                    messages.push({
+                        role: 'assistant',
+                        content: res.substring(0, res.indexOf('!newAction'))
+                    });
+                    continue; // using newaction will continue the loop
+                }
+
+                if (no_code_failures >= MAX_NO_CODE) {
+                    console.warn("Action failed, agent would not write code.");
+                    return 'Action failed, agent would not write code.';
+                }
+                messages.push({
+                    role: 'system',
+                    content: 'Error: no code provided. Write code in codeblock in your response. ``` // example ```'
+                });
+                console.warn("No code block generated. Trying again.");
+                no_code_failures++;
+                continue;
+            }
+            code = res.substring(res.indexOf('```')+3, res.lastIndexOf('```'));
+            const result = await this._stageCode(code);
+            const executionModule = result.func;
+            const lintResult = await this._lintCode(result.src_lint_copy);
+            if (lintResult) {
+                const message = 'Error: Code lint error:'+'\n'+lintResult+'\nPlease try again.';
+                console.warn("Linting error:"+'\n'+lintResult+'\n');
+                messages.push({ role: 'system', content: message });
+                continue;
+            }
+            if (!executionModule) {
+                console.warn("Failed to stage code, something is wrong.");
+                return 'Failed to stage code, something is wrong.';
+            }
+
+            try {
+                console.log('Executing code...');
+                await executionModule.main(this.agent.bot);
+
+                const code_output = this.agent.actions.getBotOutputSummary();
+                const summary = "Agent wrote this code: \n```" + this._sanitizeCode(code) + "```\nCode Output:\n" + code_output;
+                return summary;
+            } catch (e) {
+                if (this.agent.bot.interrupt_code)
+                    return null;
+
+                console.warn('Generated code threw error: ' + e.toString());
+                console.warn('trying again...');
+
+                const code_output = this.agent.actions.getBotOutputSummary();
+
+                messages.push({
+                    role: 'assistant',
+                    content: res
+                });
+                messages.push({
+                    role: 'system',
+                    content: `Code Output:\n${code_output}\nCODE EXECUTION THREW ERROR: ${e.toString()}\n Please try again:`
+                });
+            }
+        }
+        return `Code generation failed after ${MAX_ATTEMPTS} attempts.`;
+    }

-    async lintCode(code) {
+    async _lintCode(code) {
         let result = '#### CODE ERROR INFO ###\n';
         // Extract everything in the code between the beginning of 'skills./world.' and the '('
         const skillRegex = /(?:skills|world)\.(.*?)\(/g;
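`generateCode` pulls the code out of the model response with `res.substring(res.indexOf('```')+3, res.lastIndexOf('```'))` — everything between the first and last fence. That extraction in isolation:

```javascript
// Extract the body between the first and last ``` fence, as generateCode does.
function extractCodeblock(res) {
    const first = res.indexOf('```');
    const last = res.lastIndexOf('```');
    if (first === -1 || last === first) return null; // no complete codeblock
    return res.substring(first + 3, last);
}

const reply = "Here you go:\n```js\nawait skills.wait(bot, 1000);\n```";
console.log(extractCodeblock(reply)); // "js\nawait skills.wait(bot, 1000);\n"
```

Note that the language tag (`js` here) survives extraction, which is why `_sanitizeCode` strips `Javascript`/`javascript`/`js` prefixes before the code is staged.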
@@ -70,8 +153,8 @@ export class Coder {
     }

-    // write custom code to file and import it
-    async stageCode(code) {
-        code = this.sanitizeCode(code);
+    // write custom code to file and prepare for evaluation
+    async _stageCode(code) {
+        code = this._sanitizeCode(code);
         let src = '';
         code = code.replaceAll('console.log(', 'log(bot,');
         code = code.replaceAll('log("', 'log(bot,"');
@@ -96,7 +179,7 @@ export class Coder {
         // } commented for now, useful to keep files for debugging
         this.file_counter++;

-        let write_result = await this.writeFilePromise('.' + this.fp + filename, src);
+        let write_result = await this._writeFilePromise('.' + this.fp + filename, src);
         // This is where we determine the environment the agent's code should be exposed to.
         // It will only have access to these things, (in addition to basic javascript objects like Array, Object, etc.)
         // Note that the code may be able to modify the exposed objects.
@@ -115,7 +198,7 @@ export class Coder {
         return { func:{main: mainFn}, src_lint_copy: src_lint_copy };
     }

-    sanitizeCode(code) {
+    _sanitizeCode(code) {
         code = code.trim();
         const remove_strs = ['Javascript', 'javascript', 'js']
         for (let r of remove_strs) {
@@ -127,7 +210,7 @@ export class Coder {
         return code;
     }

-    writeFilePromise(filename, src) {
+    _writeFilePromise(filename, src) {
         // makes it so we can await this function
         return new Promise((resolve, reject) => {
             writeFile(filename, src, (err) => {
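`_writeFilePromise` wraps Node's callback-style `writeFile` in a Promise so the caller can `await` it. The same wrapper pattern, sketched standalone (Node's built-in `fs/promises` or `util.promisify` achieve the same result without hand-rolling it):

```javascript
import { writeFile } from 'fs';

// Wrap callback-style writeFile so callers can `await` it, as _writeFilePromise does.
function writeFilePromise(filename, src) {
    return new Promise((resolve, reject) => {
        writeFile(filename, src, (err) => {
            if (err) reject(err);
            else resolve();
        });
    });
}

// usage: await writeFilePromise('./bots/andy/action-code/main.js', src);
```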
@@ -139,93 +222,4 @@ export class Coder {
             });
         });
     }
-
-    async generateCode(agent_history) {
-        // wrapper to prevent overlapping code generation loops
-        await this.agent.actions.stop();
-        this.generating = true;
-        let res = await this.generateCodeLoop(agent_history);
-        this.generating = false;
-        if (!res.interrupted) this.agent.bot.emit('idle');
-        return res.message;
-    }
-
-    async generateCodeLoop(agent_history) {
-        this.agent.bot.modes.pause('unstuck');
-
-        let messages = agent_history.getHistory();
-        messages.push({role: 'system', content: 'Code generation started. Write code in codeblock in your response:'});
-
-        let code = null;
-        let code_return = null;
-        let failures = 0;
-        const interrupt_return = {success: true, message: null, interrupted: true, timedout: false};
-        for (let i=0; i<5; i++) {
-            if (this.agent.bot.interrupt_code)
-                return interrupt_return;
-            let res = await this.agent.prompter.promptCoding(JSON.parse(JSON.stringify(messages)));
-            if (this.agent.bot.interrupt_code)
-                return interrupt_return;
-            let contains_code = res.indexOf('```') !== -1;
-            if (!contains_code) {
-                if (res.indexOf('!newAction') !== -1) {
-                    messages.push({
-                        role: 'assistant',
-                        content: res.substring(0, res.indexOf('!newAction'))
-                    });
-                    continue; // using newaction will continue the loop
-                }
-
-                if (failures >= 3) {
-                    console.warn("Action failed, agent would not write code.");
-                    return { success: false, message: 'Action failed, agent would not write code.', interrupted: false, timedout: false };
-                }
-                messages.push({
-                    role: 'system',
-                    content: 'Error: no code provided. Write code in codeblock in your response. ``` // example ```'
-                });
-                console.warn("No code block generated.");
-                failures++;
-                continue;
-            }
-            code = res.substring(res.indexOf('```')+3, res.lastIndexOf('```'));
-            const result = await this.stageCode(code);
-            const executionModuleExports = result.func;
-            let src_lint_copy = result.src_lint_copy;
-            const analysisResult = await this.lintCode(src_lint_copy);
-            if (analysisResult) {
-                const message = 'Error: Code lint error:'+'\n'+analysisResult+'\nPlease try again.';
-                console.warn("Linting error:"+'\n'+analysisResult+'\n');
-                messages.push({ role: 'system', content: message });
-                continue;
-            }
-            if (!executionModuleExports) {
-                agent_history.add('system', 'Failed to stage code, something is wrong.');
-                console.warn("Failed to stage code, something is wrong.");
-                return {success: false, message: null, interrupted: false, timedout: false};
-            }
-
-            code_return = await this.agent.actions.runAction('newAction', async () => {
-                return await executionModuleExports.main(this.agent.bot);
-            }, { timeout: settings.code_timeout_mins });
-            if (code_return.interrupted && !code_return.timedout)
-                return { success: false, message: null, interrupted: true, timedout: false };
-            console.log("Code generation result:", code_return.success, code_return.message.toString());
-
-            if (code_return.success) {
-                const summary = "Summary of newAction\nAgent wrote this code: \n```" + this.sanitizeCode(code) + "```\nCode Output:\n" + code_return.message.toString();
-                return { success: true, message: summary, interrupted: false, timedout: false };
-            }
-
-            messages.push({
-                role: 'assistant',
-                content: res
-            });
-            messages.push({
-                role: 'system',
-                content: code_return.message + '\nCode failed. Please try again:'
-            });
-        }
-        return { success: false, message: null, interrupted: false, timedout: true };
-    }
 }
@@ -32,13 +32,22 @@ export const actionsList = [
         params: {
             'prompt': { type: 'string', description: 'A natural language prompt to guide code generation. Make a detailed step-by-step plan.' }
         },
-        perform: async function (agent, prompt) {
+        perform: async function(agent, prompt) {
+            // just ignore prompt - it is now in context in chat history
             if (!settings.allow_insecure_coding) {
+                agent.openChat('newAction is disabled. Enable with allow_insecure_coding=true in settings.js');
-                return "newAction not allowed! Code writing is disabled in settings. Notify the user.";
+                return 'newAction not allowed! Code writing is disabled in settings. Notify the user.';
             }
-            return await agent.coder.generateCode(agent.history);
+            let result = "";
+            const actionFn = async () => {
+                try {
+                    result = await agent.coder.generateCode(agent.history);
+                } catch (e) {
+                    result = 'Error generating code: ' + e.toString();
+                }
+            };
+            await agent.actions.runAction('action:newAction', actionFn);
+            return result;
         }
     },
     {
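The new `perform` captures the generated summary through a closure variable because `runAction` returns its own status object rather than the action's return value. The pattern in isolation — the `runAction` stub below is a stand-in for illustration, not the real `ActionManager` implementation:

```javascript
// Stand-in for agent.actions.runAction: runs the function, returns only a status object.
async function runAction(label, actionFn) {
    await actionFn();
    return { success: true, interrupted: false };
}

// Capture the action's own return value via a closure, as the new perform does.
async function performNewAction(generateCode) {
    let result = "";
    const actionFn = async () => {
        try {
            result = await generateCode();
        } catch (e) {
            result = 'Error generating code: ' + e.toString();
        }
    };
    await runAction('action:newAction', actionFn);
    return result;
}

async function demo() {
    console.log(await performNewAction(async () => 'Summary of code run'));
    console.log(await performNewAction(async () => { throw new Error('boom'); }));
}
demo();
```

The try/catch inside `actionFn` means a throwing `generateCode` still yields a readable result string instead of rejecting the whole action.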
@@ -87,7 +96,7 @@ export const actionsList = [
             'closeness': {type: 'float', description: 'How close to get to the player.', domain: [0, Infinity]}
         },
         perform: runAsAction(async (agent, player_name, closeness) => {
-            return await skills.goToPlayer(agent.bot, player_name, closeness);
+            await skills.goToPlayer(agent.bot, player_name, closeness);
         })
     },
     {
@@ -407,18 +416,52 @@ export const actionsList = [
             convoManager.endConversation(player_name);
             return `Conversation with ${player_name} ended.`;
         }
    },
    // { // commented for now, causes confusion with goal command
    //     name: '!npcGoal',
    //     description: 'Set a simple goal for an item or building to automatically work towards. Do not use for complex goals.',
    //     params: {
    //         'name': { type: 'string', description: 'The name of the goal to set. Can be item or building name. If empty will automatically choose a goal.' },
    //         'quantity': { type: 'int', description: 'The quantity of the goal to set. Default is 1.', domain: [1, Number.MAX_SAFE_INTEGER] }
    //     },
    //     perform: async function (agent, name=null, quantity=1) {
    //         await agent.npc.setGoal(name, quantity);
    //         agent.bot.emit('idle'); // to trigger the goal
    //         return 'Set npc goal: ' + agent.npc.data.curr_goal.name;
    //     }
    // },
+    {
+        name: '!lookAtPlayer',
+        description: 'Look at a player or look in the same direction as the player.',
+        params: {
+            'player_name': { type: 'string', description: 'Name of the target player' },
+            'direction': {
+                type: 'string',
+                description: 'How to look ("at": look at the player, "with": look in the same direction as the player)',
+            }
+        },
+        perform: async function(agent, player_name, direction) {
+            if (direction !== 'at' && direction !== 'with') {
+                return "Invalid direction. Use 'at' or 'with'.";
+            }
+            let result = "";
+            const actionFn = async () => {
+                result = await agent.vision_interpreter.lookAtPlayer(player_name, direction);
+            };
+            await agent.actions.runAction('action:lookAtPlayer', actionFn);
+            return result;
+        }
+    },
+    {
+        name: '!lookAtPosition',
+        description: 'Look at specified coordinates.',
+        params: {
+            'x': { type: 'int', description: 'x coordinate' },
+            'y': { type: 'int', description: 'y coordinate' },
+            'z': { type: 'int', description: 'z coordinate' }
+        },
+        perform: async function(agent, x, y, z) {
+            let result = "";
+            const actionFn = async () => {
+                result = await agent.vision_interpreter.lookAtPosition(x, y, z);
+            };
+            await agent.actions.runAction('action:lookAtPosition', actionFn);
+            return result;
+        }
+    },
+    {
+        name: '!digDown',
+        description: 'Digs down a specified distance. Will stop if it reaches lava, water, or a fall of >=4 blocks below the bot.',
+        params: {'distance': { type: 'int', description: 'Distance to dig down', domain: [1, Number.MAX_SAFE_INTEGER] }},
+        perform: runAsAction(async (agent, distance) => {
+            await skills.digDown(agent.bot, distance)
+        })
+    },
];
@@ -3,6 +3,7 @@ import * as mc from '../../utils/mcdata.js';
 import { getCommandDocs } from './index.js';
 import convoManager from '../conversation.js';
 import { checkLevelBlueprint, checkBlueprint } from '../task_types/construction_tasks.js';
+import { load } from 'cheerio';

 const pad = (str) => {
     return '\n' + str + '\n';
@@ -251,10 +252,38 @@ export const queryList = [
         // Generate crafting plan
         let craftingPlan = mc.getDetailedCraftingPlan(target_item, quantity, curr_inventory);
         craftingPlan = prefixMessage + craftingPlan;
         console.log(craftingPlan);
         return pad(craftingPlan);
     },
 },
+{
+    name: '!searchWiki',
+    description: 'Search the Minecraft Wiki for the given query.',
+    params: {
+        'query': { type: 'string', description: 'The query to search for.' }
+    },
+    perform: async function (agent, query) {
+        const url = `https://minecraft.wiki/w/${query}`
+        try {
+            const response = await fetch(url);
+            if (response.status === 404) {
+                return `${query} was not found on the Minecraft Wiki. Try adjusting your search term.`;
+            }
+            const html = await response.text();
+            const $ = load(html);
+
+            const parserOutput = $("div.mw-parser-output");
+
+            parserOutput.find("table.navbox").remove();
+
+            const divContent = parserOutput.text();
+
+            return divContent.trim();
+        } catch (error) {
+            console.error("Error fetching or parsing HTML:", error);
+            return `The following error occurred: ${error}`
+        }
+    }
+},
 {
     name: '!help',
     description: 'Lists all available commands and their descriptions.',
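`!searchWiki` interpolates the raw query into `https://minecraft.wiki/w/${query}`, so queries containing spaces or special characters would need encoding before the fetch. A hedged sketch of a safer URL builder — the underscore convention follows MediaWiki page titles, and the helper itself is my suggestion, not part of the diff:

```javascript
// Build a Minecraft Wiki page URL from a free-form query.
// Spaces become underscores (MediaWiki title convention); the rest is percent-encoded.
function wikiUrl(query) {
    const title = query.trim().replace(/ /g, '_');
    return `https://minecraft.wiki/w/${encodeURIComponent(title)}`;
}

console.log(wikiUrl('oak log')); // https://minecraft.wiki/w/oak_log
console.log(wikiUrl('Redstone Dust'));
```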
@@ -8,6 +8,7 @@ export class SkillLibrary {
         this.embedding_model = embedding_model;
         this.skill_docs_embeddings = {};
         this.skill_docs = null;
+        this.always_show_skills = ['skills.placeBlock', 'skills.wait', 'skills.breakBlockAt']
     }
     async initSkillLibrary() {
         const skillDocs = getSkillDocs();

@@ -26,6 +27,10 @@ export class SkillLibrary {
                 this.embedding_model = null;
             }
         }
+        this.always_show_skills_docs = {};
+        for (const skillName of this.always_show_skills) {
+            this.always_show_skills_docs[skillName] = this.skill_docs.find(doc => doc.includes(skillName));
+        }
     }

     async getAllSkillDocs() {
@@ -36,16 +41,24 @@ export class SkillLibrary {
         if(!message) // use filler message if none is provided
             message = '(no message)';
         let skill_doc_similarities = [];
-        if (!this.embedding_model) {
-            skill_doc_similarities = Object.keys(this.skill_docs)
+
+        if (select_num === -1) {
+            skill_doc_similarities = Object.keys(this.skill_docs_embeddings)
+                .map(doc_key => ({
+                    doc_key,
+                    similarity_score: 0
+                }));
+        }
+        else if (!this.embedding_model) {
+            skill_doc_similarities = Object.keys(this.skill_docs_embeddings)
                 .map(doc_key => ({
                     doc_key,
-                    similarity_score: wordOverlapScore(message, this.skill_docs[doc_key])
+                    similarity_score: wordOverlapScore(message, this.skill_docs_embeddings[doc_key])
                 }))
                 .sort((a, b) => b.similarity_score - a.similarity_score);
         }
         else {
-            let latest_message_embedding = '';
+            let latest_message_embedding = await this.embedding_model.embed(message);
             skill_doc_similarities = Object.keys(this.skill_docs_embeddings)
                 .map(doc_key => ({
                     doc_key,
@@ -55,15 +68,26 @@ export class SkillLibrary {
         }

         let length = skill_doc_similarities.length;
-        if (typeof select_num !== 'number' || isNaN(select_num) || select_num < 0) {
+        if (select_num === -1 || select_num > length) {
             select_num = length;
         } else {
             select_num = Math.min(Math.floor(select_num), length);
         }
-        let selected_docs = skill_doc_similarities.slice(0, select_num);
-        let relevant_skill_docs = '#### RELEVENT DOCS INFO ###\nThe following functions are listed in descending order of relevance.\n';
-        relevant_skill_docs += 'SkillDocs:\n'
-        relevant_skill_docs += selected_docs.map(doc => `${doc.doc_key}`).join('\n### ');
+        // Get initial docs from similarity scores
+        let selected_docs = new Set(skill_doc_similarities.slice(0, select_num).map(doc => doc.doc_key));
+
+        // Add always show docs
+        Object.values(this.always_show_skills_docs).forEach(doc => {
+            if (doc) {
+                selected_docs.add(doc);
+            }
+        });
+
+        let relevant_skill_docs = '#### RELEVANT CODE DOCS ###\nThe following functions are available to use:\n';
+        relevant_skill_docs += Array.from(selected_docs).join('\n### ');
+
+        console.log('Selected skill docs:', Array.from(selected_docs).map(doc => {
+            const first_line_break = doc.indexOf('\n');
+            return first_line_break > 0 ? doc.substring(0, first_line_break) : doc;
+        }));
         return relevant_skill_docs;
     }
 }
|
|
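The fallback branch above ranks docs by word overlap when no embedding model is available. A minimal, self-contained sketch of that ranking pattern (the real `wordOverlapScore` helper lives elsewhere in the repo; this stand-in implementation is an assumption for illustration):

```javascript
// Stand-in for the repo's wordOverlapScore helper: fraction of the doc's
// words that also appear in the message (case-insensitive).
function wordOverlapScore(message, doc) {
    const words = new Set(message.toLowerCase().split(/\W+/).filter(Boolean));
    const docWords = doc.toLowerCase().split(/\W+/).filter(Boolean);
    let overlap = 0;
    for (const w of docWords) if (words.has(w)) overlap++;
    return overlap / Math.max(docWords.length, 1);
}

// Mirror of the ranking shape used in getRelevantSkillDocs: map every doc
// key to a {doc_key, similarity_score} record, then sort descending.
function rankDocs(message, docs) {
    return Object.keys(docs)
        .map(doc_key => ({ doc_key, similarity_score: wordOverlapScore(message, docs[doc_key]) }))
        .sort((a, b) => b.similarity_score - a.similarity_score);
}
```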
@@ -582,12 +582,18 @@ export async function placeBlock(bot, blockType, x, y, z, placeOn='bottom', dontCheat=false) {
* await skills.placeBlock(bot, "oak_log", p.x + 2, p.y, p.x);
* await skills.placeBlock(bot, "torch", p.x + 1, p.y, p.x, 'side');
**/
if (!mc.getBlockId(blockType)) {
if (!mc.getBlockId(blockType) && blockType !== 'air') {
log(bot, `Invalid block type: ${blockType}.`);
return false;
}

const target_dest = new Vec3(Math.floor(x), Math.floor(y), Math.floor(z));

if (blockType === 'air') {
log(bot, `Placing air (removing block) at ${target_dest}.`);
return await breakBlockAt(bot, x, y, z);
}

if (bot.modes.isOn('cheat') && !dontCheat) {
if (bot.restrict_to_inventory) {
let block = bot.inventory.items().find(item => item.name === blockType);
@@ -1019,10 +1025,34 @@ export async function goToPosition(bot, x, y, z, min_distance=2) {
log(bot, `Teleported to ${x}, ${y}, ${z}.`);
return true;
}
bot.pathfinder.setMovements(new pf.Movements(bot));
await bot.pathfinder.goto(new pf.goals.GoalNear(x, y, z, min_distance));
log(bot, `You have reached at ${x}, ${y}, ${z}.`);
return true;

const movements = new pf.Movements(bot);
bot.pathfinder.setMovements(movements);

const checkProgress = () => {
if (bot.targetDigBlock) {
const targetBlock = bot.targetDigBlock;
const itemId = bot.heldItem ? bot.heldItem.type : null;
if (!targetBlock.canHarvest(itemId)) {
log(bot, `Pathfinding stopped: Cannot break ${targetBlock.name} with current tools.`);
bot.pathfinder.stop();
bot.stopDigging();
}
}
};

const progressInterval = setInterval(checkProgress, 1000);

try {
await bot.pathfinder.goto(new pf.goals.GoalNear(x, y, z, min_distance));
log(bot, `You have reached at ${x}, ${y}, ${z}.`);
return true;
} catch (err) {
log(bot, `Pathfinding stopped: ${err.message}.`);
return false;
} finally {
clearInterval(progressInterval);
}
}

export async function goToNearestBlock(bot, blockType, min_distance=2, range=64) {

@@ -1046,7 +1076,7 @@ export async function goToNearestBlock(bot, blockType, min_distance=2, range=64) {
log(bot, `Could not find any ${blockType} in ${range} blocks.`);
return false;
}
log(bot, `Found ${blockType} at ${block.position}.`);
log(bot, `Found ${blockType} at ${block.position}. Navigating...`);
await goToPosition(bot, block.position.x, block.position.y, block.position.z, min_distance);
return true;
@@ -1415,3 +1445,60 @@ export async function activateNearestBlock(bot, type) {
log(bot, `Activated ${type} at x:${block.position.x.toFixed(1)}, y:${block.position.y.toFixed(1)}, z:${block.position.z.toFixed(1)}.`);
return true;
}

export async function digDown(bot, distance = 10) {
/**
* Digs down a specified distance. Will stop if it reaches lava, water, or a fall of >=4 blocks below the bot.
* @param {MinecraftBot} bot, reference to the minecraft bot.
* @param {int} distance, distance to dig down.
* @returns {Promise<boolean>} true if successfully dug all the way down.
* @example
* await skills.digDown(bot, 10);
**/

let start_block_pos = bot.blockAt(bot.entity.position).position;
for (let i = 1; i <= distance; i++) {
const targetBlock = bot.blockAt(start_block_pos.offset(0, -i, 0));
let belowBlock = bot.blockAt(start_block_pos.offset(0, -i-1, 0));

if (!targetBlock || !belowBlock) {
log(bot, `Dug down ${i-1} blocks, but reached the end of the world.`);
return true;
}

// Check for lava, water
if (targetBlock.name === 'lava' || targetBlock.name === 'water' ||
belowBlock.name === 'lava' || belowBlock.name === 'water') {
log(bot, `Dug down ${i-1} blocks, but reached ${belowBlock ? belowBlock.name : '(lava/water)'}`)
return false;
}

const MAX_FALL_BLOCKS = 2;
let num_fall_blocks = 0;
for (let j = 0; j <= MAX_FALL_BLOCKS; j++) {
if (!belowBlock || (belowBlock.name !== 'air' && belowBlock.name !== 'cave_air')) {
break;
}
num_fall_blocks++;
belowBlock = bot.blockAt(belowBlock.position.offset(0, -1, 0));
}
if (num_fall_blocks > MAX_FALL_BLOCKS) {
log(bot, `Dug down ${i-1} blocks, but reached a drop below the next block.`);
return false;
}

if (targetBlock.name === 'air' || targetBlock.name === 'cave_air') {
log(bot, 'Skipping air block');
console.log(targetBlock.position);
continue;
}

let dug = await breakBlockAt(bot, targetBlock.position.x, targetBlock.position.y, targetBlock.position.z);
if (!dug) {
log(bot, 'Failed to dig block at position:' + targetBlock.position);
return false;
}
}
log(bot, `Dug down ${distance} blocks.`);
return true;
}
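The drop check inside `digDown` counts consecutive air blocks below the target and bails out when the fall would exceed `MAX_FALL_BLOCKS`. A standalone sketch of that check, with a hypothetical `getBlockName(x, y, z)` lookup standing in for `bot.blockAt`:

```javascript
// Returns true when the column below pos has at most maxFall consecutive
// air blocks, mirroring digDown's safety loop. getBlockName is a
// hypothetical stand-in for bot.blockAt(...).name.
function isDropSafe(getBlockName, pos, maxFall = 2) {
    let airCount = 0;
    let y = pos.y - 1;
    for (let j = 0; j <= maxFall; j++) {
        const name = getBlockName(pos.x, y, pos.z);
        if (name !== 'air' && name !== 'cave_air') break;
        airCount++;
        y--;
    }
    return airCount <= maxFall;
}
```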
@@ -83,6 +83,7 @@ const modes_list = [
stuck_time: 0,
last_time: Date.now(),
max_stuck_time: 20,
prev_dig_block: null,
update: async function (agent) {
if (agent.isIdle()) {
this.prev_location = null;

@@ -90,12 +91,17 @@ const modes_list = [
return; // don't get stuck when idle
}
const bot = agent.bot;
if (this.prev_location && this.prev_location.distanceTo(bot.entity.position) < this.distance) {
const cur_dig_block = bot.targetDigBlock;
if (cur_dig_block && !this.prev_dig_block) {
this.prev_dig_block = cur_dig_block;
}
if (this.prev_location && this.prev_location.distanceTo(bot.entity.position) < this.distance && cur_dig_block == this.prev_dig_block) {
this.stuck_time += (Date.now() - this.last_time) / 1000;
}
else {
this.prev_location = bot.entity.position.clone();
this.stuck_time = 0;
this.prev_dig_block = null;
}
if (this.stuck_time > this.max_stuck_time) {
say(agent, 'I\'m stuck!');
src/agent/speak.js (new file, 43 lines)
@@ -0,0 +1,43 @@
import { exec } from 'child_process';

let speakingQueue = [];
let isSpeaking = false;

export function say(textToSpeak) {
speakingQueue.push(textToSpeak);
if (!isSpeaking) {
processQueue();
}
}

function processQueue() {
if (speakingQueue.length === 0) {
isSpeaking = false;
return;
}

isSpeaking = true;
const textToSpeak = speakingQueue.shift();
const isWin = process.platform === "win32";
const isMac = process.platform === "darwin";

let command;

if (isWin) {
command = `powershell -Command "Add-Type -AssemblyName System.Speech; $s = New-Object System.Speech.Synthesis.SpeechSynthesizer; $s.Rate = 2; $s.Speak(\\"${textToSpeak}\\"); $s.Dispose()"`;
} else if (isMac) {
command = `say "${textToSpeak}"`;
} else {
command = `espeak "${textToSpeak}"`;
}

exec(command, (error, stdout, stderr) => {
if (error) {
console.error(`Error: ${error.message}`);
console.error(`${error.stack}`);
} else if (stderr) {
console.error(`Error: ${stderr}`);
}
processQueue(); // Continue with the next message in the queue
});
}
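The new speak.js serializes utterances: each `say` call enqueues text, and the next item is processed only after the platform TTS command finishes. A minimal sketch of that queue pattern, with the `exec` call replaced by an injected callback so it can be shown without spawning a process:

```javascript
// Serialized work queue in the style of speak.js: enqueue items, process
// one at a time, and advance only when the current item signals completion.
// speakFn(text, done) is a stand-in for the exec(...) TTS call.
function makeSpeaker(speakFn) {
    const queue = [];
    let busy = false;
    function next() {
        if (queue.length === 0) { busy = false; return; }
        busy = true;
        const text = queue.shift();
        speakFn(text, next); // caller invokes next() when the utterance is done
    }
    return function say(text) {
        queue.push(text);
        if (!busy) next();
    };
}
```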
@@ -1,8 +1,8 @@
import settings from '../../settings.js';
import settings from '../../../settings.js';
import prismarineViewer from 'prismarine-viewer';
const mineflayerViewer = prismarineViewer.mineflayer;

export function addViewer(bot, count_id) {
export function addBrowserViewer(bot, count_id) {
if (settings.show_bot_views)
mineflayerViewer(bot, { port: 3000+count_id, firstPerson: true, });
}
src/agent/vision/camera.js (new file, 78 lines)
@@ -0,0 +1,78 @@
import { Viewer } from 'prismarine-viewer/viewer/lib/viewer.js';
import { WorldView } from 'prismarine-viewer/viewer/lib/worldview.js';
import { getBufferFromStream } from 'prismarine-viewer/viewer/lib/simpleUtils.js';

import THREE from 'three';
import { createCanvas } from 'node-canvas-webgl/lib/index.js';
import fs from 'fs/promises';
import { Vec3 } from 'vec3';
import { EventEmitter } from 'events';

import worker_threads from 'worker_threads';
global.Worker = worker_threads.Worker;

export class Camera extends EventEmitter {
constructor (bot, fp) {
super();
this.bot = bot;
this.fp = fp;
this.viewDistance = 12;
this.width = 800;
this.height = 512;
this.canvas = createCanvas(this.width, this.height);
this.renderer = new THREE.WebGLRenderer({ canvas: this.canvas });
this.viewer = new Viewer(this.renderer);
this._init().then(() => {
this.emit('ready');
})
}

async _init () {
const botPos = this.bot.entity.position;
const center = new Vec3(botPos.x, botPos.y+this.bot.entity.height, botPos.z);
this.viewer.setVersion(this.bot.version);
// Load world
const worldView = new WorldView(this.bot.world, this.viewDistance, center);
this.viewer.listen(worldView);
worldView.listenToBot(this.bot);
await worldView.init(center);
this.worldView = worldView;
}

async capture() {
const center = new Vec3(this.bot.entity.position.x, this.bot.entity.position.y+this.bot.entity.height, this.bot.entity.position.z);
this.viewer.camera.position.set(center.x, center.y, center.z);
await this.worldView.updatePosition(center);
this.viewer.setFirstPersonCamera(this.bot.entity.position, this.bot.entity.yaw, this.bot.entity.pitch);
this.viewer.update();
this.renderer.render(this.viewer.scene, this.viewer.camera);

const imageStream = this.canvas.createJPEGStream({
bufsize: 4096,
quality: 100,
progressive: false
});

const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
const filename = `screenshot_${timestamp}`;

const buf = await getBufferFromStream(imageStream);
await this._ensureScreenshotDirectory();
await fs.writeFile(`${this.fp}/$(unknown).jpg`, buf);
console.log('saved', filename);
return filename;
}

async _ensureScreenshotDirectory() {
let stats;
try {
stats = await fs.stat(this.fp);
} catch (e) {
if (!stats?.isDirectory()) {
await fs.mkdir(this.fp);
}
}
}
}
src/agent/vision/vision_interpreter.js (new file, 81 lines)
@@ -0,0 +1,81 @@
import { Vec3 } from 'vec3';
import { Camera } from "./camera.js";
import fs from 'fs';

export class VisionInterpreter {
constructor(agent, allow_vision) {
this.agent = agent;
this.allow_vision = allow_vision;
this.fp = './bots/'+agent.name+'/screenshots/';
if (allow_vision) {
this.camera = new Camera(agent.bot, this.fp);
}
}

async lookAtPlayer(player_name, direction) {
if (!this.allow_vision || !this.agent.prompter.vision_model.sendVisionRequest) {
return "Vision is disabled. Use other methods to describe the environment.";
}
let result = "";
const bot = this.agent.bot;
const player = bot.players[player_name]?.entity;
if (!player) {
return `Could not find player ${player_name}`;
}

let filename;
if (direction === 'with') {
await bot.look(player.yaw, player.pitch);
result = `Looking in the same direction as ${player_name}\n`;
filename = await this.camera.capture();
} else {
await bot.lookAt(new Vec3(player.position.x, player.position.y + player.height, player.position.z));
result = `Looking at player ${player_name}\n`;
filename = await this.camera.capture();
}

return result + `Image analysis: "${await this.analyzeImage(filename)}"`;
}

async lookAtPosition(x, y, z) {
if (!this.allow_vision || !this.agent.prompter.vision_model.sendVisionRequest) {
return "Vision is disabled. Use other methods to describe the environment.";
}
let result = "";
const bot = this.agent.bot;
await bot.lookAt(new Vec3(x, y + 2, z));
result = `Looking at coordinate ${x}, ${y}, ${z}\n`;

let filename = await this.camera.capture();

return result + `Image analysis: "${await this.analyzeImage(filename)}"`;
}

getCenterBlockInfo() {
const bot = this.agent.bot;
const maxDistance = 128; // Maximum distance to check for blocks
const targetBlock = bot.blockAtCursor(maxDistance);

if (targetBlock) {
return `Block at center view: ${targetBlock.name} at (${targetBlock.position.x}, ${targetBlock.position.y}, ${targetBlock.position.z})`;
} else {
return "No block in center view";
}
}

async analyzeImage(filename) {
try {
const imageBuffer = fs.readFileSync(`${this.fp}/$(unknown).jpg`);
const messages = this.agent.history.getHistory();

const blockInfo = this.getCenterBlockInfo();
const result = await this.agent.prompter.promptVision(messages, imageBuffer);
return result + `\n${blockInfo}`;

} catch (error) {
console.warn('Error reading image:', error);
return `Error reading image: ${error.message}`;
}
}
}
@@ -26,7 +26,7 @@ export class Claude {
this.params.max_tokens = this.params.thinking.budget_tokens + 1000;
// max_tokens must be greater than thinking.budget_tokens
} else {
this.params.max_tokens = 16000;
this.params.max_tokens = 4096;
}
}
const resp = await this.anthropic.messages.create({

@@ -47,16 +47,40 @@ export class Claude {
}
}
catch (err) {
if (err.message.includes("does not support image input")) {
res = "Vision is only supported by certain models.";
} else {
res = "My brain disconnected, try again.";
}
console.log(err);
res = 'My brain disconnected, try again.';
}
return res;
}

async sendVisionRequest(turns, systemMessage, imageBuffer) {
const imageMessages = [...turns];
imageMessages.push({
role: "user",
content: [
{
type: "text",
text: systemMessage
},
{
type: "image",
source: {
type: "base64",
media_type: "image/jpeg",
data: imageBuffer.toString('base64')
}
}
]
});

return this.sendRequest(imageMessages, systemMessage);
}

async embed(text) {
throw new Error('Embeddings are not supported by Claude.');
}
}
@@ -39,7 +39,6 @@ export class Gemini {
model: this.model_name || "gemini-1.5-flash",
// systemInstruction does not work bc google is trash
};

if (this.url) {
model = this.genAI.getGenerativeModel(
modelConfig,

@@ -72,12 +71,76 @@ export class Gemini {
}
});
const response = await result.response;
const text = response.text();
let text;

// Handle "thinking" models since they smart
if (this.model_name && this.model_name.includes("thinking")) {
if (
response.candidates &&
response.candidates.length > 0 &&
response.candidates[0].content &&
response.candidates[0].content.parts &&
response.candidates[0].content.parts.length > 1
) {
text = response.candidates[0].content.parts[1].text;
} else {
console.warn("Unexpected response structure for thinking model:", response);
text = response.text();
}
} else {
text = response.text();
}

console.log('Received.');

return text;
}

async sendVisionRequest(turns, systemMessage, imageBuffer) {
let model;
if (this.url) {
model = this.genAI.getGenerativeModel(
{ model: this.model_name || "gemini-1.5-flash" },
{ baseUrl: this.url },
{ safetySettings: this.safetySettings }
);
} else {
model = this.genAI.getGenerativeModel(
{ model: this.model_name || "gemini-1.5-flash" },
{ safetySettings: this.safetySettings }
);
}

const imagePart = {
inlineData: {
data: imageBuffer.toString('base64'),
mimeType: 'image/jpeg'
}
};

const stop_seq = '***';
const prompt = toSinglePrompt(turns, systemMessage, stop_seq, 'model');
let res = null;
try {
console.log('Awaiting Google API vision response...');
const result = await model.generateContent([prompt, imagePart]);
const response = await result.response;
const text = response.text();
console.log('Received.');
if (!text.includes(stop_seq)) return text;
const idx = text.indexOf(stop_seq);
res = text.slice(0, idx);
} catch (err) {
console.log(err);
if (err.message.includes("Image input modality is not enabled for models/")) {
res = "Vision is only supported by certain models.";
} else {
res = "An unexpected error occurred, please try again.";
}
}
return res;
}

async embed(text) {
let model;
if (this.url) {

@@ -94,4 +157,4 @@ export class Gemini {
const result = await model.embedContent(text);
return result.embedding.values;
}
}
}
src/models/glhf.js (new file, 70 lines)
@@ -0,0 +1,70 @@
import OpenAIApi from 'openai';
import { getKey } from '../utils/keys.js';

export class GLHF {
constructor(model_name, url) {
this.model_name = model_name;
const apiKey = getKey('GHLF_API_KEY');
if (!apiKey) {
throw new Error('API key not found. Please check keys.json and ensure GHLF_API_KEY is defined.');
}
this.openai = new OpenAIApi({
apiKey,
baseURL: url || "https://glhf.chat/api/openai/v1"
});
}

async sendRequest(turns, systemMessage, stop_seq = '***') {
// Construct the message array for the API request.
let messages = [{ role: 'system', content: systemMessage }].concat(turns);
const pack = {
model: this.model_name || "hf:meta-llama/Llama-3.1-405B-Instruct",
messages,
stop: [stop_seq]
};

const maxAttempts = 5;
let attempt = 0;
let finalRes = null;

while (attempt < maxAttempts) {
attempt++;
console.log(`Awaiting glhf.chat API response... (attempt: ${attempt})`);
try {
let completion = await this.openai.chat.completions.create(pack);
if (completion.choices[0].finish_reason === 'length') {
throw new Error('Context length exceeded');
}
let res = completion.choices[0].message.content;
// If there's an open <think> tag without a corresponding </think>, retry.
if (res.includes("<think>") && !res.includes("</think>")) {
console.warn("Partial <think> block detected. Re-generating...");
continue;
}
// If there's a closing </think> tag but no opening <think>, prepend one.
if (res.includes("</think>") && !res.includes("<think>")) {
res = "<think>" + res;
}
finalRes = res.replace(/<\|separator\|>/g, '*no response*');
break; // Valid response obtained.
} catch (err) {
if ((err.message === 'Context length exceeded' || err.code === 'context_length_exceeded') && turns.length > 1) {
console.log('Context length exceeded, trying again with shorter context.');
return await this.sendRequest(turns.slice(1), systemMessage, stop_seq);
} else {
console.error(err);
finalRes = 'My brain disconnected, try again.';
break;
}
}
}
if (finalRes === null) {
finalRes = "I thought too hard, sorry, try again";
}
return finalRes;
}

async embed(text) {
throw new Error('Embeddings are not supported by glhf.');
}
}
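Several of the providers in this commit apply the same `<think>`-tag repair before returning a reply: prepend an opening tag when only the closing one survived, and strip fully closed think blocks. A small sketch of those two steps in isolation:

```javascript
// Repair a reply whose opening <think> tag was lost: if only </think> is
// present, prepend <think> so the block is well-formed.
function repairThink(res) {
    if (res.includes('</think>') && !res.includes('<think>')) {
        res = '<think>' + res;
    }
    return res;
}

// Remove complete <think>...</think> blocks, leaving only the visible reply.
function stripThink(res) {
    return res.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
}
```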
@@ -48,6 +48,9 @@ export class GPT {
if ((err.message == 'Context length exceeded' || err.code == 'context_length_exceeded') && turns.length > 1) {
console.log('Context length exceeded, trying again with shorter context.');
return await this.sendRequest(turns.slice(1), systemMessage, stop_seq);
} else if (err.message.includes('image_url')) {
console.log(err);
res = 'Vision is only supported by certain models.';
} else {
console.log(err);
res = 'My brain disconnected, try again.';

@@ -56,6 +59,24 @@ export class GPT {
return res;
}

async sendVisionRequest(messages, systemMessage, imageBuffer) {
const imageMessages = [...messages];
imageMessages.push({
role: "user",
content: [
{ type: "text", text: systemMessage },
{
type: "image_url",
image_url: {
url: `data:image/jpeg;base64,${imageBuffer.toString('base64')}`
}
}
]
});

return this.sendRequest(imageMessages, systemMessage);
}

async embed(text) {
if (text.length > 8191)
text = text.slice(0, 8191);

@@ -66,6 +87,7 @@ export class GPT {
});
return embedding.data[0].embedding;
}

}
@@ -43,6 +43,9 @@ export class Grok {
if ((err.message == 'Context length exceeded' || err.code == 'context_length_exceeded') && turns.length > 1) {
console.log('Context length exceeded, trying again with shorter context.');
return await this.sendRequest(turns.slice(1), systemMessage, stop_seq);
} else if (err.message.includes('The model expects a single `text` element per message.')) {
console.log(err);
res = 'Vision is only supported by certain models.';
} else {
console.log(err);
res = 'My brain disconnected, try again.';

@@ -51,6 +54,24 @@ export class Grok {
// sometimes outputs special token <|separator|>, just replace it
return res.replace(/<\|separator\|>/g, '*no response*');
}

async sendVisionRequest(messages, systemMessage, imageBuffer) {
const imageMessages = [...messages];
imageMessages.push({
role: "user",
content: [
{ type: "text", text: systemMessage },
{
type: "image_url",
image_url: {
url: `data:image/jpeg;base64,${imageBuffer.toString('base64')}`
}
}
]
});

return this.sendRequest(imageMessages, systemMessage);
}

async embed(text) {
throw new Error('Embeddings are not supported by Grok.');
@@ -24,35 +24,27 @@ export class GroqCloudAPI {

this.groq = new Groq({ apiKey: getKey('GROQCLOUD_API_KEY') });

}

async sendRequest(turns, systemMessage, stop_seq=null) {
async sendRequest(turns, systemMessage, stop_seq = null) {
// Construct messages array
let messages = [{"role": "system", "content": systemMessage}].concat(turns);

let messages = [{"role": "system", "content": systemMessage}].concat(turns); // The standard for GroqCloud is just appending to a messages array starting with the system prompt, but
// this is perfectly acceptable too, and I recommend it.
// I still feel as though I should note it for any future revisions of MindCraft, though.

// These variables look odd, but they're for the future. Please keep them intact.
let raw_res = null;
let res = null;
let tool_calls = null;

try {

console.log("Awaiting Groq response...");

// Handle deprecated max_tokens parameter
if (this.params.max_tokens) {

console.warn("GROQCLOUD WARNING: A profile is using `max_tokens`. This is deprecated. Please move to `max_completion_tokens`.");
this.params.max_completion_tokens = this.params.max_tokens;
delete this.params.max_tokens;

}

if (!this.params.max_completion_tokens) {

this.params.max_completion_tokens = 8000; // Set it lower. This is a common theme.

this.params.max_completion_tokens = 4000;
}

let completion = await this.groq.chat.completions.create({

@@ -63,21 +55,40 @@ export class GroqCloudAPI {
...(this.params || {})
});

raw_res = completion.choices[0].message;
res = raw_res.content;
res = completion.choices[0].message;

res = res.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
}

catch(err) {

if (err.message.includes("content must be a string")) {
res = "Vision is only supported by certain models.";
} else {
console.log(this.model_name);
res = "My brain disconnected, try again.";
}
console.log(err);
res = "My brain just kinda stopped working. Try again.";

}

return res;
}

async sendVisionRequest(messages, systemMessage, imageBuffer) {
const imageMessages = messages.filter(message => message.role !== 'system');
imageMessages.push({
role: "user",
content: [
{ type: "text", text: systemMessage },
{
type: "image_url",
image_url: {
url: `data:image/jpeg;base64,${imageBuffer.toString('base64')}`
}
}
]
});

return this.sendRequest(imageMessages);
}

async embed(_) {
throw new Error('Embeddings are not supported by Groq.');
}
@ -1,46 +1,85 @@
|
|||
import {toSinglePrompt} from '../utils/text.js';
|
||||
import {getKey} from '../utils/keys.js';
|
||||
import {HfInference} from "@huggingface/inference";
|
||||
import { toSinglePrompt } from '../utils/text.js';
|
||||
import { getKey } from '../utils/keys.js';
|
||||
import { HfInference } from "@huggingface/inference";
|
||||
|
||||
export class HuggingFace {
|
||||
constructor(model_name, url, params) {
|
||||
this.model_name = model_name.replace('huggingface/','');
|
||||
this.url = url;
|
||||
this.params = params;
|
||||
constructor(model_name, url, params) {
|
||||
// Remove 'huggingface/' prefix if present
|
||||
this.model_name = model_name.replace('huggingface/', '');
|
||||
this.url = url;
|
||||
this.params = params;
|
||||
|
||||
if (this.url) {
|
||||
console.warn("Hugging Face doesn't support custom urls!");
|
||||
if (this.url) {
|
||||
console.warn("Hugging Face doesn't support custom urls!");
|
||||
}
|
||||
|
||||
this.huggingface = new HfInference(getKey('HUGGINGFACE_API_KEY'));
|
||||
}
|
||||
|
||||
async sendRequest(turns, systemMessage) {
|
||||
const stop_seq = '***';
|
||||
// Build a single prompt from the conversation turns
|
||||
const prompt = toSinglePrompt(turns, null, stop_seq);
|
||||
// Fallback model if none was provided
|
||||
const model_name = this.model_name || 'meta-llama/Meta-Llama-3-8B';
|
||||
// Combine system message with the prompt
|
||||
const input = systemMessage + "\n" + prompt;
|
||||
|
||||
// We'll try up to 5 times in case of partial <think> blocks for DeepSeek-R1 models.
|
||||
const maxAttempts = 5;
|
||||
let attempt = 0;
|
||||
let finalRes = null;
|
||||
|
||||
while (attempt < maxAttempts) {
|
||||
attempt++;
|
||||
console.log(`Awaiting Hugging Face API response... (model: ${model_name}, attempt: ${attempt})`);
|
||||
let res = '';
|
||||
try {
|
||||
// Consume the streaming response chunk by chunk
|
||||
for await (const chunk of this.huggingface.chatCompletionStream({
|
||||
model: model_name,
|
||||
messages: [{ role: "user", content: input }],
|
||||
        this.huggingface = new HfInference(getKey('HUGGINGFACE_API_KEY'));
    }

    async sendRequest(turns, systemMessage) {
        const stop_seq = '***';
        const prompt = toSinglePrompt(turns, null, stop_seq);
        let model_name = this.model_name || 'meta-llama/Meta-Llama-3-8B';
        const input = systemMessage + "\n" + prompt;

        // We'll attempt up to 5 times if the model emits a mismatched <think> block.
        const maxAttempts = 5;
        let attempt = 0;
        let finalRes = null;

        while (attempt < maxAttempts) {
            attempt++;
            console.log(`Awaiting Hugging Face API response... (attempt: ${attempt})`);
            let res = '';
            try {
                for await (const chunk of this.huggingface.chatCompletionStream({
                    model: model_name,
                    messages: [{ role: "user", content: input }],
                    ...(this.params || {})
                })) {
                    res += (chunk.choices[0]?.delta?.content || "");
                }
            } catch (err) {
                console.log(err);
                res = 'My brain disconnected, try again.';
                // Break out immediately; we only retry when handling partial <think> tags.
                finalRes = res;
                break;
            }

            // If the model is DeepSeek-R1, check for mismatched <think> blocks.
            const hasOpenTag = res.includes("<think>");
            const hasCloseTag = res.includes("</think>");

            // If there's a partial mismatch, warn and retry the entire request.
            if (hasOpenTag && !hasCloseTag) {
                console.warn("Partial <think> block detected. Re-generating...");
                continue;
            }

            // If both tags are present, remove the <think> block entirely.
            if (hasOpenTag && hasCloseTag) {
                res = res.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
            }

            finalRes = res;
            break; // Exit loop if we got a valid response.
        }

        // If no valid response was obtained after max attempts, assign a fallback.
        if (finalRes == null) {
            console.warn("Could not get a valid <think> block or normal response after max attempts.");
            finalRes = 'I thought too hard, sorry, try again.';
        }
        console.log('Received.');
        console.log(finalRes);
        return finalRes;
    }

    async embed(text) {
        throw new Error('Embeddings are not supported by HuggingFace.');
    }
}
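The `<think>`-block handling that `sendRequest` repeats across providers can be distilled into one small pure helper. This is a sketch for illustration, not code from the repo; `sanitizeThinkTags` is a hypothetical name:

```javascript
// Sketch of the <think>-block handling used by sendRequest above.
// Returns { retry: true } when only an opening tag is present (caller should
// re-generate); otherwise returns the response with any complete
// <think>...</think> blocks removed.
function sanitizeThinkTags(res) {
    const hasOpenTag = res.includes("<think>");
    const hasCloseTag = res.includes("</think>");
    if (hasOpenTag && !hasCloseTag) {
        return { retry: true, text: null }; // partial block: regenerate
    }
    if (hasCloseTag && !hasOpenTag) {
        res = '<think>' + res; // normalize a dangling close tag
    }
    return { retry: false, text: res.replace(/<think>[\s\S]*?<\/think>/g, '').trim() };
}
```

Returning a `retry` flag instead of looping keeps the regeneration policy (max attempts, fallback message) in the caller, which is what the loop above does implicitly.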
113  src/models/hyperbolic.js  Normal file

@@ -0,0 +1,113 @@
import { getKey } from '../utils/keys.js';

export class Hyperbolic {
    constructor(modelName, apiUrl) {
        this.modelName = modelName || "deepseek-ai/DeepSeek-V3";
        this.apiUrl = apiUrl || "https://api.hyperbolic.xyz/v1/chat/completions";

        // Retrieve the Hyperbolic API key from keys.js
        this.apiKey = getKey('HYPERBOLIC_API_KEY');
        if (!this.apiKey) {
            throw new Error('HYPERBOLIC_API_KEY not found. Check your keys.js file.');
        }
    }

    /**
     * Sends a chat completion request to the Hyperbolic endpoint.
     *
     * @param {Array} turns - An array of message objects, e.g. [{role: 'user', content: 'Hi'}].
     * @param {string} systemMessage - The system prompt or instruction.
     * @param {string} stopSeq - A stopping sequence, default '***'.
     * @returns {Promise<string>} - The model's reply.
     */
    async sendRequest(turns, systemMessage, stopSeq = '***') {
        // Prepare the messages with a system prompt at the beginning
        const messages = [{ role: 'system', content: systemMessage }, ...turns];

        // Build the request payload
        const payload = {
            model: this.modelName,
            messages: messages,
            max_tokens: 8192,
            temperature: 0.7,
            top_p: 0.9,
            stream: false
        };

        const maxAttempts = 5;
        let attempt = 0;
        let finalRes = null;

        while (attempt < maxAttempts) {
            attempt++;
            console.log(`Awaiting Hyperbolic API response... (attempt: ${attempt})`);
            console.log('Messages:', messages);

            let completionContent = null;

            try {
                const response = await fetch(this.apiUrl, {
                    method: 'POST',
                    headers: {
                        'Content-Type': 'application/json',
                        'Authorization': `Bearer ${this.apiKey}`
                    },
                    body: JSON.stringify(payload)
                });

                if (!response.ok) {
                    throw new Error(`HTTP error! status: ${response.status}`);
                }

                const data = await response.json();
                if (data?.choices?.[0]?.finish_reason === 'length') {
                    throw new Error('Context length exceeded');
                }

                completionContent = data?.choices?.[0]?.message?.content || '';
                console.log('Received response from Hyperbolic.');
            } catch (err) {
                if (
                    (err.message === 'Context length exceeded' || err.code === 'context_length_exceeded') &&
                    turns.length > 1
                ) {
                    console.log('Context length exceeded, trying again with a shorter context...');
                    return await this.sendRequest(turns.slice(1), systemMessage, stopSeq);
                } else {
                    console.error(err);
                    completionContent = 'My brain disconnected, try again.';
                }
            }

            // Check for <think> blocks
            const hasOpenTag = completionContent.includes("<think>");
            const hasCloseTag = completionContent.includes("</think>");

            if (hasOpenTag && !hasCloseTag) {
                console.warn("Partial <think> block detected. Re-generating...");
                continue; // Retry the request
            }

            if (hasCloseTag && !hasOpenTag) {
                completionContent = '<think>' + completionContent;
            }

            if (hasOpenTag && hasCloseTag) {
                completionContent = completionContent.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
            }

            finalRes = completionContent.replace(/<\|separator\|>/g, '*no response*');
            break; // Valid response obtained; exit loop
        }

        if (finalRes == null) {
            console.warn("Could not get a valid <think> block or normal response after max attempts.");
            finalRes = 'I thought too hard, sorry, try again.';
        }
        return finalRes;
    }

    async embed(text) {
        throw new Error('Embeddings are not supported by Hyperbolic.');
    }
}
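On a context-length error the code above retries recursively with `turns.slice(1)`, i.e. it keeps dropping the oldest turn until the request fits. A minimal iterative sketch of the same behavior; `fitTurns` and the character-count `measure` are illustrative assumptions, not repo code:

```javascript
// Sketch: drop the oldest turns until the conversation fits a budget,
// mirroring the recursive slice(1) retry in sendRequest above.
// `measure` is a stand-in cost function (assumption: character count as a
// crude proxy for tokens).
function fitTurns(turns, budget, measure = (t) => t.content.length) {
    let fitted = [...turns];
    while (fitted.length > 1 && fitted.reduce((n, t) => n + measure(t), 0) > budget) {
        fitted = fitted.slice(1); // drop the oldest turn, like the recursive retry
    }
    return fitted;
}
```

Keeping at least one turn (`fitted.length > 1`) matches the `turns.length > 1` guard above: the most recent message is never dropped.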

@@ -10,45 +10,86 @@ export class Local {
    }
    async sendRequest(turns, systemMessage) {
-        let model = this.model_name || 'llama3';
+        let model = this.model_name || 'llama3.1'; // Updated to llama3.1, as it is more performant than llama3
        let messages = strictFormat(turns);
-        messages.unshift({role: 'system', content: systemMessage});
-        let res = null;
-        try {
-            console.log(`Awaiting local response... (model: ${model})`)
-            res = await this.send(this.chat_endpoint, {
-                model: model,
-                messages: messages,
-                stream: false,
-                ...(this.params || {})
-            });
-            if (res)
-                res = res['message']['content'];
-        }
-        catch (err) {
-            if (err.message.toLowerCase().includes('context length') && turns.length > 1) {
-                console.log('Context length exceeded, trying again with shorter context.');
-                return await sendRequest(turns.slice(1), systemMessage, stop_seq);
-            } else {
-                console.log(err);
-                res = 'My brain disconnected, try again.';
+        messages.unshift({ role: 'system', content: systemMessage });
+
+        // We'll attempt up to 5 times for models with deepseek-r1-esque reasoning if the <think> tags are mismatched.
+        const maxAttempts = 5;
+        let attempt = 0;
+        let finalRes = null;
+
+        while (attempt < maxAttempts) {
+            attempt++;
+            console.log(`Awaiting local response... (model: ${model}, attempt: ${attempt})`);
+            let res = null;
+            try {
+                res = await this.send(this.chat_endpoint, {
+                    model: model,
+                    messages: messages,
+                    stream: false,
+                    ...(this.params || {})
+                });
+                if (res) {
+                    res = res['message']['content'];
+                } else {
+                    res = 'No response data.';
+                }
+            } catch (err) {
+                if (err.message.toLowerCase().includes('context length') && turns.length > 1) {
+                    console.log('Context length exceeded, trying again with shorter context.');
+                    return await this.sendRequest(turns.slice(1), systemMessage);
+                } else {
+                    console.log(err);
+                    res = 'My brain disconnected, try again.';
+                }
            }
+
+            // If the model name includes "deepseek-r1" or "Andy-3.5-reasoning", then handle the <think> block.
+            const hasOpenTag = res.includes("<think>");
+            const hasCloseTag = res.includes("</think>");
+
+            // If there's a partial mismatch, retry to get a complete response.
+            if (hasOpenTag && !hasCloseTag) {
+                console.warn("Partial <think> block detected. Re-generating...");
+                continue;
+            }
+
+            // If </think> is present but <think> is not, prepend <think> so the tags match
+            // and no error occurs when the model reasons without opening the block.
+            if (hasCloseTag && !hasOpenTag) {
+                res = '<think>' + res;
+            }
+
+            // If both tags appear, remove them (and everything inside).
+            if (hasOpenTag && hasCloseTag) {
+                res = res.replace(/<think>[\s\S]*?<\/think>/g, '');
+            }
+
+            finalRes = res;
+            break; // Exit the loop if we got a valid response.
        }
-        return res;
+
+        if (finalRes == null) {
+            console.warn("Could not get a valid <think> block or normal response after max attempts.");
+            finalRes = 'I thought too hard, sorry, try again.';
+        }
+        return finalRes;
    }

    async embed(text) {
        let model = this.model_name || 'nomic-embed-text';
-        let body = {model: model, prompt: text};
+        let body = { model: model, input: text };
        let res = await this.send(this.embedding_endpoint, body);
-        return res['embedding']
+        return res['embedding'];
    }

    async send(endpoint, body) {
        const url = new URL(endpoint, this.url);
        let method = 'POST';
        let headers = new Headers();
-        const request = new Request(url, {method, headers, body: JSON.stringify(body)});
+        const request = new Request(url, { method, headers, body: JSON.stringify(body) });
        let data = null;
        try {
            const res = await fetch(request);

@@ -63,4 +104,4 @@ export class Local {
        }
        return data;
    }
}
@@ -47,6 +47,7 @@ export class Mistral {
        ];
        messages.push(...strictFormat(turns));
+
        console.log('Awaiting mistral api response...')
        const response = await this.#client.chat.complete({
            model,
            messages,

@@ -55,14 +56,33 @@ export class Mistral {

            result = response.choices[0].message.content;
        } catch (err) {
-            console.log(err)
-
-            result = "My brain disconnected, try again.";
+            if (err.message.includes("A request containing images has been given to a model which does not have the 'vision' capability.")) {
+                result = "Vision is only supported by certain models.";
+            } else {
+                result = "My brain disconnected, try again.";
+            }
+            console.log(err);
        }

        return result;
    }

+    async sendVisionRequest(messages, systemMessage, imageBuffer) {
+        const imageMessages = [...messages];
+        imageMessages.push({
+            role: "user",
+            content: [
+                { type: "text", text: systemMessage },
+                {
+                    type: "image_url",
+                    imageUrl: `data:image/jpeg;base64,${imageBuffer.toString('base64')}`
+                }
+            ]
+        });
+
+        return this.sendRequest(imageMessages, systemMessage);
+    }
+
    async embed(text) {
        const embedding = await this.#client.embeddings.create({
            model: "mistral-embed",
@@ -18,6 +18,8 @@ import { HuggingFace } from './huggingface.js';
import { Qwen } from "./qwen.js";
import { Grok } from "./grok.js";
import { DeepSeek } from './deepseek.js';
+import { Hyperbolic } from './hyperbolic.js';
+import { GLHF } from './glhf.js';
import { OpenRouter } from './openrouter.js';
import { VLLM } from './vllm.js';
import { promises as fs } from 'fs';

@@ -47,7 +49,6 @@ export class Prompter {
        }
        // base overrides default, individual overrides base
-

        this.convo_examples = null;
        this.coding_examples = null;

@@ -72,6 +73,14 @@ export class Prompter {
            this.code_model = this.chat_model;
        }

+        if (this.profile.vision_model) {
+            let vision_model_profile = this._selectAPI(this.profile.vision_model);
+            this.vision_model = this._createModel(vision_model_profile);
+        }
+        else {
+            this.vision_model = this.chat_model;
+        }
+
        let embedding = this.profile.embedding;
        if (embedding === undefined) {
            if (chat_model_profile.api !== 'ollama')

@@ -127,10 +136,12 @@ export class Prompter {
            profile = {model: profile};
        }
        if (!profile.api) {
-            if (profile.model.includes('gemini'))
+            if (profile.model.includes('openrouter/'))
+                profile.api = 'openrouter'; // must do first because shares names with other models
+            else if (profile.model.includes('ollama/'))
+                profile.api = 'ollama'; // also must do early because shares names with other models
+            else if (profile.model.includes('gemini'))
                profile.api = 'google';
-            else if (profile.model.includes('openrouter/'))
-                profile.api = 'openrouter'; // must do before others bc shares model names
            else if (profile.model.includes('vllm/'))
                profile.api = 'vllm';
            else if (profile.model.includes('gpt') || profile.model.includes('o1') || profile.model.includes('o3'))

@@ -145,6 +156,10 @@ export class Prompter {
                model_profile.api = 'mistral';
            else if (profile.model.includes("groq/") || profile.model.includes("groqcloud/"))
                profile.api = 'groq';
+            else if (profile.model.includes("glhf/"))
+                profile.api = 'glhf';
+            else if (profile.model.includes("hyperbolic/"))
+                profile.api = 'hyperbolic';
            else if (profile.model.includes('novita/'))
                profile.api = 'novita';
            else if (profile.model.includes('qwen'))

@@ -152,17 +167,14 @@ export class Prompter {
            else if (profile.model.includes('grok'))
                profile.api = 'xai';
            else if (profile.model.includes('deepseek'))
                profile.api = 'deepseek';
            else if (profile.model.includes('mistral'))
                profile.api = 'mistral';
-            else if (profile.model.includes('llama3'))
-                profile.api = 'ollama';
            else
                throw new Error('Unknown model:', profile.model);
        }
        return profile;
    }

    _createModel(profile) {
        let model = null;
        if (profile.api === 'google')

@@ -174,13 +186,17 @@ export class Prompter {
        else if (profile.api === 'replicate')
            model = new ReplicateAPI(profile.model.replace('replicate/', ''), profile.url, profile.params);
        else if (profile.api === 'ollama')
-            model = new Local(profile.model, profile.url, profile.params);
+            model = new Local(profile.model.replace('ollama/', ''), profile.url, profile.params);
        else if (profile.api === 'mistral')
            model = new Mistral(profile.model, profile.url, profile.params);
        else if (profile.api === 'groq')
            model = new GroqCloudAPI(profile.model.replace('groq/', '').replace('groqcloud/', ''), profile.url, profile.params);
        else if (profile.api === 'huggingface')
            model = new HuggingFace(profile.model, profile.url, profile.params);
+        else if (profile.api === 'glhf')
+            model = new GLHF(profile.model.replace('glhf/', ''), profile.url, profile.params);
+        else if (profile.api === 'hyperbolic')
+            model = new Hyperbolic(profile.model.replace('hyperbolic/', ''), profile.url, profile.params);
        else if (profile.api === 'novita')
            model = new Novita(profile.model.replace('novita/', ''), profile.url, profile.params);
        else if (profile.api === 'qwen')

@@ -197,7 +213,6 @@ export class Prompter {
            throw new Error('Unknown API:', profile.api);
        return model;
    }

    getName() {
        return this.profile.name;
    }

@@ -397,6 +412,13 @@ export class Prompter {
        return res.trim().toLowerCase() === 'respond';
    }

+    async promptVision(messages, imageBuffer) {
+        await this.checkCooldown();
+        let prompt = this.profile.image_analysis;
+        prompt = await this.replaceStrings(prompt, messages, null, null, null);
+        return await this.vision_model.sendVisionRequest(messages, prompt, imageBuffer);
+    }
+
    async promptGoalSetting(messages, last_goals) {
        // deprecated
        let system_message = this.profile.goal_setting;
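The branch order in `_selectAPI` above is load-bearing: prefixed names such as `openrouter/...` and `ollama/...` must be tested before bare substrings like `gemini` or `llama3`, since a name like `ollama/llama3` contains both. A first-match routing sketch (hypothetical helper with an abbreviated rule table, not the repo's full list):

```javascript
// Sketch: first-match model-name routing, as in _selectAPI above.
// Prefix rules come first because e.g. 'openrouter/google/gemini-2.0'
// also contains the substring 'gemini'.
const rules = [
    ['openrouter/', 'openrouter'],
    ['ollama/', 'ollama'],
    ['gemini', 'google'],
    ['llama3', 'ollama'],
];

function selectApi(modelName) {
    for (const [needle, api] of rules) {
        if (modelName.includes(needle)) return api; // first match wins
    }
    throw new Error('Unknown model: ' + modelName);
}
```

Reordering `rules` changes the answer for prefixed names, which is exactly the bug the commit fixes by moving the `openrouter/` and `ollama/` checks to the top.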
@@ -205,6 +205,13 @@ export function getItemCraftingRecipes(itemName) {
            {craftedCount : r.result.count}
        ]);
    }
+    // sort recipes by whether their ingredients include common items
+    const commonItems = ['oak_planks', 'oak_log', 'coal', 'cobblestone'];
+    recipes.sort((a, b) => {
+        let commonCountA = Object.keys(a[0]).filter(key => commonItems.includes(key)).reduce((acc, key) => acc + a[0][key], 0);
+        let commonCountB = Object.keys(b[0]).filter(key => commonItems.includes(key)).reduce((acc, key) => acc + b[0][key], 0);
+        return commonCountB - commonCountA;
+    });

    return recipes;
}

@@ -403,7 +410,7 @@ export function getDetailedCraftingPlan(targetItem, count = 1, current_inventory
    const inventory = { ...current_inventory };
    const leftovers = {};
    const plan = craftItem(targetItem, count, inventory, leftovers);
-    return formatPlan(plan);
+    return formatPlan(targetItem, plan);
}

function isBaseItem(item) {

@@ -469,7 +476,7 @@ function craftItem(item, count, inventory, leftovers, crafted = { required: {},
    return crafted;
}

-function formatPlan({ required, steps, leftovers }) {
+function formatPlan(targetItem, { required, steps, leftovers }) {
    const lines = [];

    if (Object.keys(required).length > 0) {

@@ -485,6 +492,10 @@ function formatPlan({ required, steps, leftovers }) {
    lines.push('');
    lines.push(...steps);

+    if (Object.keys(required).some(item => item.includes('oak')) && !targetItem.includes('oak')) {
+        lines.push('Note: Any variant of wood can be used for this recipe.');
+    }
+
    if (Object.keys(leftovers).length > 0) {
        lines.push('\nYou will have leftover:');
        Object.entries(leftovers).forEach(([item, count]) =>
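The new sort in `getItemCraftingRecipes` ranks recipes by the total count of "common" ingredients they use, so recipes built from abundant materials come first. The same comparator as a standalone sketch (`commonCount` and `sortRecipes` are hypothetical helper names; recipes keep the `[ingredients, {craftedCount}]` shape shown above):

```javascript
// Sketch of the recipe sort above: recipes whose ingredients use more
// common items (summed by count) come first.
const commonItems = ['oak_planks', 'oak_log', 'coal', 'cobblestone'];

function commonCount(ingredients) {
    return Object.keys(ingredients)
        .filter(key => commonItems.includes(key))
        .reduce((acc, key) => acc + ingredients[key], 0);
}

function sortRecipes(recipes) {
    // copy before sorting so the caller's array is untouched
    return [...recipes].sort((a, b) => commonCount(b[0]) - commonCount(a[0]));
}
```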
@@ -46,7 +46,9 @@ export function strictFormat(turns) {
    let messages = [];
    let filler = {role: 'user', content: '_'};
    for (let msg of turns) {
-        msg.content = msg.content.trim();
+        if (typeof msg.content === 'string') {
+            msg.content = msg.content.trim();
+        }
        if (msg.role === 'system') {
            msg.role = 'user';
            msg.content = 'SYSTEM: ' + msg.content;
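The added `typeof` guard in `strictFormat` matters because vision-style messages carry an array in `content`, and calling `.trim()` on an array would throw. A sketch of the guarded normalization (`normalizeTurn` is a hypothetical name distilling the loop body above):

```javascript
// Sketch: only trim string content; array content (e.g. vision parts)
// passes through untouched, matching the typeof guard added above.
function normalizeTurn(msg) {
    const out = { ...msg };
    if (typeof out.content === 'string') {
        out.content = out.content.trim();
    }
    if (out.role === 'system') {
        out.role = 'user';
        out.content = 'SYSTEM: ' + out.content;
    }
    return out;
}
```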