Embedding TypeScript
Hako is all you need
By no stretch of the imagination is the idea of extending compiled programs with interpreted languages at runtime novel. Developers have been doing it for years; Lua comes to mind for extending games with mods and plugins.
I’m very opinionated in my belief that the choice of language you use for extending your software directly correlates to your software’s ability to build a healthy ecosystem of third-party and community-made extensions. JavaScript (and by extension TypeScript) seems like the obvious choice, so why don’t we see more native apps using it?
The state of platform-specific bindings for JavaScript engines is dreadful. This isn’t a slight against the developers who endeavor to create them; they’re trying to build bindings for behemoth projects with no guarantees of a stable API or even ABI, where breaking changes are frequent and the maintenance burden is incredibly high. Go searching for bindings for V8 and JavaScriptCore and you’ll be met with repos that haven’t been touched in years.
Just look at this patch I made to add ES module support to JavaScriptCore’s C API if you want a sense of how many breaking changes you have to wrangle just to use these projects in an embeddable manner.
But “Andrew!” I hear you screaming, “you can just use QuickJS!” And you’re absolutely right. While the state of platform-specific bindings for QuickJS is no better than the larger projects, its considerably smaller API surface makes it more maintainable, and the fact it was designed to be embeddable makes it the best candidate by far.
But here lies the second issue of choosing the right extension language: security. QuickJS, while an amazing feat of engineering, is not exempt from security issues. Remember, extensions by their very nature are untrusted code running in an environment you don’t control. If your process is running with elevated privileges and someone crafts JavaScript in just the right way to exploit your users’ machines, that’s on you.
So what are we to do?
Hako
Hako1 is a JavaScript engine built on top of QuickJS that compiles down to WebAssembly, a memory-safe, sandboxed execution environment. This means even though Hako is written in C, programs embedding it have an extra layer of protection from potential memory vulnerabilities. What would normally be exploitable security holes become denial-of-service attacks at worst. When your extension crashes, it crashes the sandbox, not your process2.
Beyond WebAssembly’s baseline protections, Hako provides its own sandboxing mechanisms. You can restrict JavaScript’s capabilities at a granular level: disable memory allocation entirely, remove specific language features, or lock down what the execution context can access. This matters more now that we’re in an era where AI agents are running arbitrary code on people’s machines with minimal supervision, and developers are shipping code produced by large language models that occasionally hallucinate entire APIs or dangerous commands.
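To give that capability model a concrete shape, here is a purely illustrative TypeScript sketch; none of these names are Hako’s actual API. The principle is that anything the host does not explicitly grant simply never exists inside the sandbox:

```typescript
// Purely illustrative: models the capability principle, not Hako's API.
// A realm only ever sees the globals the host chooses to hand it.
type Capability = "console" | "timers" | "fetch";

function buildGlobals(granted: Set<Capability>): Record<string, unknown> {
  const globals: Record<string, unknown> = {};
  if (granted.has("console")) {
    globals.console = { log: (...args: unknown[]) => void args };
  }
  // Anything not granted is simply absent; there is nothing to escape to.
  return globals;
}

const lockedDown = buildGlobals(new Set());
const withConsole = buildGlobals(new Set<Capability>(["console"]));
```

The deny-by-default direction matters: the host enumerates what untrusted code may touch, instead of trying to enumerate everything it must not.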
Building the Translation Layer
QuickJS is great, but calling it from other languages can be a pain. Hako acts as a translation layer that sits between QuickJS and the host application and makes every function call explicit about memory ownership. Instead of figuring out who owns what, the type signatures tell you everything:
//! Creates a new JavaScript string value
//! @param ctx Context
//! @param str C string. Host owns.
//! @return New string value. Caller owns
//! free with HAKO_FreeValuePointer.
HAKO_EXPORT("HAKO_NewString") extern JSValue* HAKO_NewString(JSContext* ctx, const char* str);

You pass in a string (which you own), you get back a value (which you now own and need to free). No ambiguity. Of course, unless you’re implementing a Hako host, you won’t need to worry about this because a host wraps everything into the natural memory management primitives of your language.
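To make that ownership contract concrete, here is a minimal TypeScript sketch of how a host might fold "caller owns, must free" into a disposable handle. All names here (JSValueHandle, rawNewString, rawFreeValue) are invented for illustration, not Hako’s actual host API:

```typescript
// Illustrative sketch only: JSValueHandle, rawNewString, and rawFreeValue
// are invented names, not Hako's actual host API.
class JSValueHandle {
  private freed = false;
  constructor(
    private readonly ptr: number,
    private readonly freeFn: (p: number) => void,
  ) {}
  get pointer(): number {
    if (this.freed) throw new Error("use after free");
    return this.ptr;
  }
  dispose(): void {
    // Idempotent: disposing twice is a no-op instead of a double free.
    if (!this.freed) {
      this.freeFn(this.ptr);
      this.freed = true;
    }
  }
}

// Stand-ins for the WASM exports, so the flow is runnable here.
const freedPointers: number[] = [];
let nextPtr = 1;
const rawNewString = (_str: string): number => nextPtr++;
const rawFreeValue = (p: number): void => { freedPointers.push(p); };

// Mirrors HAKO_NewString's contract: the host owns the input string,
// the caller owns (and must free) the returned value.
function newString(str: string): JSValueHandle {
  return new JSValueHandle(rawNewString(str), rawFreeValue);
}

const value = newString("hello");
value.dispose();
```

The point is that the explicit ownership comments in the C header are exactly the information a wrapper like this needs, which is what makes generating hosts mechanically feasible.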
Parsing the Bindings
It has been said by many that you cannot parse C header files. Unfortunately, I am not among the many, and have decided to do it with RegEx. For generating host bindings, syntactic analysis is good enough; I don’t need a full C compiler, just the function signatures and their docs.
The parsing works because we control the format:
function parseHeaderFunction(lines: string[], exportLineIndex: number): HeaderFunctionInfo | null {
const exportLine = lines[exportLineIndex];
const exportMatch = exportLine.match(/HAKO_EXPORT\("([^"]+)"\)/);
if (!exportMatch) return null;
const name = exportMatch[1];
const fullRegex = /HAKO_EXPORT\("[^"]+"\)\s+extern\s+([\w\s*]+?)\s+(\w+)\s*\(([^)]*)\);/;
const match = exportLine.match(fullRegex);
if (!match) return null;
const cReturnType = match[1].trim();
const paramsStr = match[3];
// Parse parameter types and extract docs from comments...
}

We also use wasm-objdump to extract the actual WebAssembly function signatures from the compiled module. Combine the C header info (types and ownership) with the WebAssembly signatures (actual function layout) and you get a complete binding specification:
{
  "name": "HAKO_NewString",
  "funcIndex": 62,
  "wasmSignature": {
    "params": ["i32", "i32"],
    "returns": "i32"
  },
  "cReturnType": "JSValue*",
  "cParams": [
    {
      "name": "ctx",
      "cType": "JSContext*",
      "doc": "Context"
    },
    {
      "name": "str",
      "cType": "const char*",
      "doc": "C string. Host owns."
    }
  ],
  "summary": "Creates a new JavaScript string value"
}

Generating Host Bindings
With the binding spec, we can generate host-specific code. For .NET, Hako abstracts the WebAssembly runtime so we only need to target a simple interface. The generated code looks like this:
/// <summary>Creates a new JavaScript string value</summary>
/// <param name="ctx">Context</param>
/// <param name="str">C string. Host owns.</param>
public JSValuePointer NewString(JSContextPointer ctx, int str)
{
    return Hako.Dispatcher.Invoke(() =>
    {
        if (_newString == null)
            throw new InvalidOperationException("HAKO_NewString not available");
        return _newString(ctx, str);
    });
}

All of this makes adding new functions to Hako easy: update the C header, recompile the WASM, rerun codegen. The bindings regenerate automatically with docs and all.
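The heart of that codegen step is a merge: pair each parsed C declaration with the wasm signature extracted by wasm-objdump. Here is an illustrative TypeScript sketch (the shapes mirror the JSON spec above; mergeSpec itself is an invented helper, not part of Hako’s tooling):

```typescript
// Illustrative merge step: pair a parsed C declaration with the wasm
// signature from wasm-objdump. mergeSpec is an invented helper.
interface CParam { name: string; cType: string; doc: string }
interface HeaderFn {
  name: string;
  cReturnType: string;
  cParams: CParam[];
  summary: string;
}
interface WasmSig { funcIndex: number; params: string[]; returns: string }

function mergeSpec(header: HeaderFn, wasm: WasmSig) {
  // Cheap sanity check: header and compiled module must agree on arity,
  // or the generated binding would pass arguments incorrectly.
  if (wasm.params.length !== header.cParams.length) {
    throw new Error(`arity mismatch for ${header.name}`);
  }
  return {
    ...header,
    funcIndex: wasm.funcIndex,
    wasmSignature: { params: wasm.params, returns: wasm.returns },
  };
}

const spec = mergeSpec(
  {
    name: "HAKO_NewString",
    cReturnType: "JSValue*",
    cParams: [
      { name: "ctx", cType: "JSContext*", doc: "Context" },
      { name: "str", cType: "const char*", doc: "C string. Host owns." },
    ],
    summary: "Creates a new JavaScript string value",
  },
  { funcIndex: 62, params: ["i32", "i32"], returns: "i32" },
);
```

Cross-checking the two sources against each other is also what catches a header and a compiled module that have drifted apart.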
TypeScript for Everyone
TypeScript’s type system is great at defining contracts. It gives both developers and AI models the context they need to write better code. The problem is you can’t just pass this to a JavaScript engine:
function greet(name: string): void {
console.log(`Hello ${name}`);
}

The type annotations aren’t valid JavaScript. The engine will throw a syntax error.
So you have a few options. The first is using a bundler. Either your users bundle their code themselves (extra build step), or we integrate a bundler directly into Hako.
I went down this path initially, experimenting with swc, esbuild, and even rollup. Eventually I abandoned all of these approaches for one simple reason: Hako is not trying to be Bun or Deno for embedded use cases. Needing tsconfig resolution, module bundling, and all the other complexities that come with being a bundler is way out of scope.
The approach we went with is simpler: just strip the types out before evaluating. This is similar to what ts-blank-space does, except instead of relying on the TypeScript compiler, we use tree-sitter and its abstract syntax tree to essentially reimplement type stripping entirely in C.
I was initially worried about performance. Tree-sitter builds an AST and traverses it recursively, which seemed like it might be slow when compiled to WebAssembly. Turns out once Wasmtime’s JIT kicks in, it runs about as fast as swc (around 0.01-0.05ms for typical files).
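For context, a number like that comes from the usual micro-benchmark shape: warm up first so the JIT-compiled path is what gets measured, then average over many iterations. A hypothetical harness (stripTypes here is a stub standing in for the real binding):

```typescript
// Hypothetical micro-benchmark harness; stripTypes is a stub standing in
// for the real call into the WASM type stripper.
const stripTypes = (src: string): string => src; // placeholder

function benchmark(fn: () => void, iterations: number): number {
  // Warm up so JIT-compiled code is measured, not compilation itself.
  for (let i = 0; i < 100; i++) fn();
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  return (performance.now() - start) / iterations; // ms per call
}

const source = "let x: string = 'hello';";
const msPerCall = benchmark(() => stripTypes(source), 1_000);
```

The warm-up loop is the important part: without it, a Wasmtime-backed function looks far slower than its steady-state cost.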
Building the Type Stripper
The type stripper’s API is straightforward. You create a context, call it to strip types, and it returns JavaScript:
typedef enum {
TS_STRIP_SUCCESS = 0,
TS_STRIP_ERROR_INVALID_INPUT,
TS_STRIP_ERROR_PARSE_FAILED,
TS_STRIP_ERROR_UNSUPPORTED,
TS_STRIP_ERROR_OUT_OF_MEMORY
} ts_strip_result_t;
ts_strip_ctx_t* ts_strip_ctx_new(void);
ts_strip_result_t ts_strip_with_ctx(
ts_strip_ctx_t* ctx,
const char* typescript_source,
char** javascript_out,
size_t* javascript_len
);
void ts_strip_ctx_delete(ts_strip_ctx_t* ctx);

The implementation walks tree-sitter’s AST and blanks out type-only syntax. Instead of removing type annotations entirely (which would break source maps and line numbers), we replace them with spaces:
static inline bool blank_range(parse_ctx_t* ctx, uint32_t start, uint32_t end) {
return range_array_push(&ctx->ranges, ctx->allocator, FLAG_BLANK, start, end);
}
static bool blank_type_anno(parse_ctx_t* ctx, TSNode n) {
uint32_t start = ts_node_start_byte(n);
uint32_t end = ts_node_end_byte(n);
if (start > 0 && ctx->source[start - 1] == ':') {
start--;
}
return blank_range(ctx, start, end);
}

So this TypeScript:

let x: string = 'hello';

Becomes this JavaScript:

let x         = 'hello';

Same byte length, same line numbers, just with the type annotation replaced by spaces.
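The final rewrite step can be sketched in a few lines of TypeScript (the real pass lives in C inside Hako; applyBlankRanges is an invented name): apply the collected byte ranges to the source, overwriting each with spaces so offsets and line numbers survive.

```typescript
// Illustrative only: the real blanking pass is C code inside Hako.
// Overwrite each [start, end) range with spaces, preserving newlines
// so both byte offsets and line numbers stay stable.
function applyBlankRanges(source: string, ranges: Array<[number, number]>): string {
  const chars = source.split("");
  for (const [start, end] of ranges) {
    for (let i = start; i < end; i++) {
      if (chars[i] !== "\n") chars[i] = " ";
    }
  }
  return chars.join("");
}

const tsSource = "let x: string = 'hello';";
// Bytes 5..13 cover ": string", as blank_type_anno would report.
const jsSource = applyBlankRanges(tsSource, [[5, 13]]);
// jsSource has the same length as tsSource, with the annotation blanked.
```

Preserving newlines inside blanked ranges is what keeps multi-line type arguments from shifting every subsequent line.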
The visitor pattern (which you should all know I’m a fan of) handles different node types:
static int visit_node(parse_ctx_t* ctx, TSNode n) {
const char* type = ts_node_type(n);
// Type-only declarations get blanked entirely
if (strcmp(type, "type_alias_declaration") == 0 ||
strcmp(type, "interface_declaration") == 0) {
blank_stmt(ctx, n);
return VISIT_BLANKED;
}
// Variable declarations need selective blanking
if (strcmp(type, "variable_declarator") == 0) {
uint32_t count = ts_node_child_count(n);
for (uint32_t i = 0; i < count; i++) {
TSNode child = ts_node_child(n, i);
const char* child_type = ts_node_type(child);
if (strcmp(child_type, "type_annotation") == 0) {
blank_type_anno(ctx, child);
} else {
visit_node(ctx, child);
}
}
return VISITED_JS;
}
// Default: visit children
visit_children(ctx, n);
return VISITED_JS;
}

Some TypeScript features can’t be stripped because they have runtime semantics. Enums, parameter properties, and the old namespace syntax all generate JavaScript code. When we encounter these, we return TS_STRIP_ERROR_UNSUPPORTED:
// Enum declaration
if (strcmp(type, "enum_declaration") == 0) {
ctx->has_unsupported = true;
return VISITED_JS;
}
// Parameter properties in constructors
static bool has_param_props(TSNode params) {
uint32_t count = ts_node_child_count(params);
for (uint32_t i = 0; i < count; i++) {
TSNode param = ts_node_child(params, i);
// Check for accessibility modifiers
if (find_child_type(param, "accessibility_modifier")) {
return true;
}
}
return false;
}

Integrating with Hako
To avoid reallocating tree-sitter’s parser on every evaluation, we store the stripper context as opaque data on the QuickJS runtime:
typedef struct {
ts_strip_ctx_t* stripper;
// ... other runtime data
} hako_runtime_data_t;
HAKO_Status HAKO_InitTypeStripper(JSRuntime* rt) {
hako_runtime_data_t* data = JS_GetRuntimeOpaque(rt);
if (!data) {
return HAKO_STATUS_ERROR_INVALID_ARGS;
}
data->stripper = ts_strip_ctx_new_with_allocator(&hako_allocator);
if (!data->stripper) {
return HAKO_STATUS_ERROR_OUT_OF_MEMORY;
}
return HAKO_STATUS_SUCCESS;
}

The actual stripping happens in HAKO_StripTypes:
HAKO_Status HAKO_StripTypes(
JSRuntime* rt,
const char* typescript_source,
char** javascript_out,
size_t* javascript_len
) {
hako_runtime_data_t* data = JS_GetRuntimeOpaque(rt);
if (!data || !data->stripper) {
return HAKO_STATUS_ERROR_INVALID_ARGS;
}
ts_strip_result_t result = ts_strip_with_ctx(
data->stripper,
typescript_source,
javascript_out,
javascript_len
);
switch (result) {
case TS_STRIP_SUCCESS:
return HAKO_STATUS_SUCCESS;
case TS_STRIP_ERROR_PARSE_FAILED:
return HAKO_STATUS_ERROR_PARSE_FAILED;
case TS_STRIP_ERROR_UNSUPPORTED:
return HAKO_STATUS_ERROR_UNSUPPORTED;
default:
return HAKO_STATUS_ERROR_OUT_OF_MEMORY;
}
}

HAKO_Eval automatically detects TypeScript by checking the filename extension or the eval flags. If it ends in .ts, it strips types before evaluating:
using var runtime = Hako.Initialize<WasmtimeEngine>();
using var realm = runtime.CreateRealm().WithGlobals(g => g.WithConsole());
var result = await realm.EvalAsync<int>(@"
interface User {
  name: string;
  age: number;
}
function greet(user: User): string {
  return `${user.name} is ${user.age} years old`;
}
const alice: User = { name: 'Alice', age: 30 };
console.log(greet(alice));
alice.age + 12;
", new() { StripTypes = true });
Console.WriteLine($"Result: {result}");

You can also strip types manually if you need more control:
var typescript = @"
type Operation = 'add' | 'multiply';
const calculate = (a: number, b: number, op: Operation): number => {
  return op === 'add' ? a + b : a * b;
};
calculate(5, 3, 'multiply');
";
var javascript = runtime.StripTypes(typescript);
var calcResult = await realm.EvalAsync<int>(javascript);
Console.WriteLine($"Calculation: {calcResult}");

The whole thing is surprisingly fast. For a typical TypeScript file (a few hundred lines with interfaces, types, and generics), stripping takes about 0.02ms. That’s basically free compared to the actual evaluation time.
Performance
I’ll spare you the technical deconstruction. If you want to see everything that goes into a complete host implementation with all the sugar to make it feel like a natural extension of the host language, check out the .NET implementation on GitHub.
The more important question is: how fast is all this?
Rather than share synthetic benchmarks, I’m going to show two real examples.
Raylib 3D Visualization
First example is Hako interfacing with raylib to drive a 3D visualization. The TypeScript code is driving realtime 3D graphics through FFI calls back to C#, which then calls into native raylib. The type definitions are automatically generated from the C# module:
declare module 'raylib' {
export class Vector3 {
constructor(x?: number, y?: number, z?: number);
x: number;
y: number;
z: number;
}
export class Camera3D {
constructor();
position: Vector3;
target: Vector3;
up: Vector3;
fovy: number;
projection: number;
}
export function beginMode3D(camera: Camera3D): void;
export function endMode3D(): void;
export function drawCube(position: Vector3, width: number, height: number, length: number, color: Color): void;
export function drawCubeWires(position: Vector3, width: number, height: number, length: number, color: Color): void;
// ... etc
}

The demo renders over 100 animated objects at 60fps, with each frame making hundreds of FFI calls.
Here’s a snippet of the TypeScript code:
const objects: SceneObject[] = [];
// Generate a grid of objects
const gridSize = 5;
for (let x = -gridSize; x <= gridSize; x++) {
for (let z = -gridSize; z <= gridSize; z++) {
const dist = Math.sqrt(x * x + z * z);
let type: 'cube' | 'tower' | 'ring' = 'cube';
if (dist < 2) type = 'tower';
else if (dist > 6 && Math.abs(x) % 2 === 0) type = 'ring';
objects.push({
pos: new Vector3(x * 3.5, 0, z * 3.5),
type: type,
offset: (x + z) * 0.5 + dist * 0.3
});
}
}
// Main loop
while (!windowShouldClose()) {
time += 0.016;
// Animate camera
camera.position = new Vector3(
Math.cos(angle) * radius,
height,
Math.sin(angle) * radius
);
beginDrawing();
clearBackground(bgColor);
beginMode3D(camera);
// Draw all objects with animated heights
for (let i = 0; i < objects.length; i++) {
const obj = objects[i];
const dist = Math.hypot(obj.pos.x, obj.pos.z) / 3.5; // grid-space distance
const height = 2 + Math.sin(time * 2 - dist * 0.4 + obj.offset) * 2;
drawCube(new Vector3(obj.pos.x, height / 2, obj.pos.z),
1.5, height, 1.5, color);
drawCubeWires(new Vector3(obj.pos.x, height / 2, obj.pos.z),
1.5, height, 1.5, wireColor);
}
endMode3D();
endDrawing();
}

The C# side uses source generators to automatically create the module bindings:
[JSModule(Name = "raylib")]
[JSModuleClass(ClassType = typeof(Vector3), ExportName = "Vector3")]
[JSModuleClass(ClassType = typeof(Camera3D), ExportName = "Camera3D")]
internal partial class RaylibModule
{
[JSModuleMethod(Name = "drawCube")]
public static void DrawCube(Vector3 position, double width, double height, double length, Color color)
{
var nativePos = new System.Numerics.Vector3((float)position.X, (float)position.Y, (float)position.Z);
var nativeColor = new Raylib_cs.Color(color.R, color.G, color.B, color.A);
Program.RunOnMainThread(() => Raylib.DrawCube(nativePos, (float)width, (float)height, (float)length, nativeColor));
}
// ... more methods
}

iOS Finance App
The second example is a finance tracking app running on iOS. The entire UI is written in JavaScript using a UI framework I created, and the rendering backend is written in C. Hako (compiled to WASM, no JIT) sits in the middle.
The app has smooth animations, layout calculations, and SVG rendering; it handles complex state management and responds instantly to user interaction, including immediate startup.
Both examples show the same thing: Hako’s overhead is low enough that you can build real applications with it. Not toy demos, actual software people would use. The FFI boundary is fast, the interpreter is fast, and the whole thing just works. It’s portable to any platform, and it will be making its way into much of my software.
Closing
That’s pretty much it. Hako is a JavaScript engine you can embed in your applications, with TypeScript support, WebAssembly sandboxing, and a clean FFI layer. It works on .NET, iOS, and anywhere else you can run WebAssembly.
To try it out or see the code behind everything, head over to https://github.com/6over3/hako
Thanks for reading.
1. Hako is a WebAssembly Reactor. I believe the WASI Preview 1 Reactor represents the closest thing we’ll ever get to a truly universal FFI. The WebAssembly Component Model and WIT have their place, but the developer experience leaves much to be desired. A WASI reactor is simply a WebAssembly module that operates continuously and can be called multiple times to react to events or requests. More or less a portable shared library.
2. If you try-catch!

