Add HTML viz for side-by-side comparing LLM output #4

Merged 1 commit on Feb 19, 2024
4 changes: 3 additions & 1 deletion .gitignore
@@ -130,4 +130,6 @@ dist
.pnp.*
playground
package-lock.json
*.key
*.key

test/*.html
1 change: 1 addition & 0 deletions README.md
@@ -25,6 +25,7 @@ const { ChatSession, CompletionService } = require('langxlang')
```js
const service = new CompletionService({ openai: [key], gemini: [key] })
const response = await service.requestCompletion('gpt-3.5-turbo-16k', '' /* empty system prompt */, 'Tell me about yourself')
console.log(response.text)
```

#### Chatting with a model
14 changes: 11 additions & 3 deletions src/index.d.ts
@@ -1,4 +1,5 @@
declare module 'langxlang' {
type Model = 'gpt-3.5-turbo-16k' | 'gpt-3.5-turbo' | 'gpt-4' | 'gpt-4-turbo-preview' | 'gemini-1.0-pro'
class CompletionService {
// Creates an instance of completion service.
// Note: as an alternative to explicitly passing the API keys in the constructor you can:
@@ -10,14 +11,21 @@ declare module 'langxlang' {
cachePath: string

// Request a non-streaming completion from the model.
requestCompletion(model: string, systemPrompt: string, userPrompt: string): Promise<string>
requestCompletion(model: Model, systemPrompt: string, userPrompt: string): Promise<{ text: string }>
}

class ChatSession {
// ChatSession is for back-and-forth conversation between a user and an LLM.
constructor(completionService: CompletionService, model: string, systemPrompt: string)
constructor(completionService: CompletionService, model: Model, systemPrompt: string)
// Send a message to the LLM and receive a response as return value. The chunkCallback
// can be defined to listen to bits of the message stream as it's being written by the LLM.
sendMessage(message: string, chunkCallback: ({ content: string }) => void): Promise<string>
sendMessage(message: string, chunkCallback: (chunk: { content: string }) => void): Promise<string>
}

interface Tools {
// Generate HTML that shows side-by-side outputs for the system/user prompt across different models.
makeVizForPrompt(systemPrompt: string, userPrompt: string, models: Model[]): Promise<string>
}

const tools: Tools
}
3 changes: 2 additions & 1 deletion src/index.js
@@ -10,5 +10,6 @@ module.exports = {
ChatSession,
openai,
palm2,
gemini
gemini,
tools: require('./tools')
}
111 changes: 111 additions & 0 deletions src/tools.js
@@ -0,0 +1,111 @@
const { CompletionService } = require('./CompletionService')

function makeVizHtml (data) {
return `
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LLM Output Viz</title>
<style>
pre {
max-width: 25vw;
overflow: auto;
}
h3 {
background-color: lightcyan;
margin-top: 12px;
}
#grid {
/* margin: 1%; */
.correct h3 {
background-color: lightgreen;
}
}
#grid div {
margin-right: 2%;
}
</style>
</head>
<body>
<button id="wordwrap">Toggle Word Wrap</button>
<div id="grid" style='display: flex; flex-direction: row;'>
<div>
<h3>System Prompt</h3>
<pre id="psys">SYS PROMPT</pre>
</div>
<div>
<h3>User Prompt</h3>
<pre id="pusr">USR PROMPT</pre>
</div>
${
data.models.map(([modelName, modelId]) =>
`<div><h3>${modelName}</h3><pre id="presp${modelId}">MODEL OUTPUT</pre></div>`).join('\n')
}
</div>

<script>
function toggleWordWrap () {
const $pre = document.querySelectorAll('pre');
for (const $p of $pre) {
$p.style.whiteSpace = $p.style.whiteSpace === 'pre-wrap' ? 'pre' : 'pre-wrap';
}
}
wordwrap.onclick = toggleWordWrap;
toggleWordWrap();

const $psys = document.getElementById('psys');
const $pusr = document.getElementById('pusr');

const data = ${JSON.stringify(data)};
const outputs = data.outputs;
if ($psys) $psys.textContent = data.system;
if ($pusr) $pusr.textContent = data.user;
for (const [modelName, modelId] of data.models) {
const $presp = document.getElementById('presp' + modelId);
if ($presp) $presp.textContent = outputs[modelId];
}
</script>
</body>
</html>
`
}

async function makeVizForPrompt (system, user, models) {
const service = new CompletionService()
const data = { models: [], outputs: {} }
for (const model of models) {
const { text } = await service.requestCompletion(model, system, user)
switch (model) {
case 'gpt-3.5-turbo-16k':
data.models.push(['GPT-3.5 Turbo 16k', '3516turbo'])
data.outputs['3516turbo'] = text
break
case 'gpt-3.5-turbo':
data.models.push(['GPT-3.5 Turbo', '35turbo'])
data.outputs['35turbo'] = text
break
case 'gpt-4':
data.models.push(['GPT-4', 'gpt4'])
data.outputs.gpt4 = text
break
case 'gpt-4-turbo-preview':
data.models.push(['GPT-4 Turbo Preview', 'gpt4turbo'])
data.outputs.gpt4turbo = text
break
case 'gemini-1.0-pro':
data.models.push(['Gemini 1.0 Pro', 'gemini'])
data.outputs.gemini = text
break
default:
data.models.push([model, model])
data.outputs[model] = text
}
}
data.system = system
data.user = user
return makeVizHtml(data)
}

module.exports = { makeVizForPrompt }
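The switch in `makeVizForPrompt` above maps each known model name to a display label and a DOM-safe element id (interpolated into `presp${modelId}` in the generated HTML). The same mapping can be sketched as a lookup table; this is an illustrative alternative, not part of the PR:

```javascript
// Known model names mapped to [display label, DOM-safe id],
// mirroring the switch statement in makeVizForPrompt.
const MODEL_LABELS = {
  'gpt-3.5-turbo-16k': ['GPT-3.5 Turbo 16k', '3516turbo'],
  'gpt-3.5-turbo': ['GPT-3.5 Turbo', '35turbo'],
  'gpt-4': ['GPT-4', 'gpt4'],
  'gpt-4-turbo-preview': ['GPT-4 Turbo Preview', 'gpt4turbo'],
  'gemini-1.0-pro': ['Gemini 1.0 Pro', 'gemini']
}

function labelFor (model) {
  // Unknown models fall back to the raw name, as in the default case.
  return MODEL_LABELS[model] || [model, model]
}

console.log(labelFor('gpt-4')) // [ 'GPT-4', 'gpt4' ]
```

A table like this keeps the label/id pairs in one place, so adding a new model to the `Model` union in `index.d.ts` only requires one extra entry.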
10 changes: 10 additions & 0 deletions test/viz.js
@@ -0,0 +1,10 @@
const { tools } = require('langxlang')
const fs = require('fs')
const path = require('path')

async function main () {
// const all = ['gpt-3.5-turbo-16k', 'gpt-3.5-turbo', 'gpt-4', 'gpt-4-turbo-preview', 'gemini-1.0-pro']
const viz = await tools.makeVizForPrompt('', 'Why is the sky blue?', ['gpt-3.5-turbo'])
fs.writeFileSync(path.join(__dirname, 'viz.html'), viz)
}
main()