What an execution platform really is
Beyond the calculation engine: shared infrastructure to describe, run, reuse, and integrate models without rebuilding them every time.
Subspace
When we talk about analytical models, projections, or simulations, many tools focus on just one slice of the problem.
Some help prepare data.
Some help build an interface.
Others let you run a precise calculation in a specific context.
But in practice, the real need is often broader.
A useful model must be able to:
- be described clearly
- execute coherently
- be reused in several contexts
- integrate with other systems
- be edited without starting from scratch
- be used by different roles as appropriate
That is exactly where an execution platform comes in.
An execution platform is not just access to a calculation engine
An execution engine on its own is not enough.
It can be powerful, fast, even very efficient. But if everything around it that you need in order to run it still has to be rebuilt each time, the total cost stays high.
An execution platform goes further.
It provides a common base to:
- describe models
- run them
- reach them in more than one way
- keep the same logic from one context to another
In other words, it is not only about the calculation.
It is also about how that calculation actually becomes usable.
Why that changes a lot
Without that built-in coherence, the same need often gets recreated several times:
- in a file
- in a script
- in an interface
- in a backend
- in integration logic
Each time, complexity piles up.
With an execution platform, the goal is different:
describe the logic once, then use it in multiple contexts without redeveloping the foundation every time.
That shift is what changes the economics.
What Subspace brings
Subspace is built on that idea.
The point is not only to deliver a faster calculation.
The point is to deliver execution infrastructure for models and analytical projections.
Infrastructure you can consume as a service.
That infrastructure can then be used:
- in the portal
- via the API
- with an SDK
- inside a product
- in an internal workflow
What matters is not only that there are several ways to access it.
What matters is that those paths share the same execution logic.
What that enables
When one base serves many uses, several benefits appear:
- less re-development
- less duplication
- more consistency
- more reuse
- clearer visibility into what actually ran
- easier model evolution
So the upside is not only speed.
It is also that execution stops being a foundation you keep rebuilding.
From the portal to the executable model
The images here show the same model from several angles in the Subspace portal.
You see different views of the product, but they are not separate pieces of logic.
Those views are different ways to interact with one execution.
The portal exposes the same model you can then use from a Python script, a TypeScript application, or the HTTP API.
Schema view: dependencies between variables
Aggregated results after a run
Distributions
Model and API tab with JSON response
The screenshots above match the Capital projection model: 1000 scenarios, 12 steps, a per-scenario uniform rate, then derived variables such as capital and profit.
The JSON below is the equivalent SP Model.
{
  "steps": 12,
  "scenarios": 1000,
  "variables": [
    {
      "per": "scenario",
      "dist": "uniform",
      "name": "taux",
      "params": {
        "max": 0.07,
        "min": 0.03
      }
    },
    {
      "init": 1000,
      "name": "capital",
      "formula": "capital[t-1] * (1 + taux)"
    },
    {
      "init": 0,
      "name": "profit",
      "formula": "capital[t] - capital[0]"
    }
  ]
}

This point is central.
This JSON is not a secondary or decorative view of the model.
It is the model description itself.
That is the basis Subspace’s calculation engine uses to interpret what to run, no matter which access mode you use.
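To make the description concrete, here is a minimal pure-Python sketch of the semantics this SP Model encodes: draw one uniform rate per scenario, compound capital over the steps, and derive profit from the capital path. This is an illustration of the model's meaning, not Subspace's actual engine; the `simulate` function, its seeding, and the hard-coded variable handling are assumptions made for the example.

```python
# Illustrative sketch of what an engine would compute from the SP Model above.
# Not Subspace's implementation: the spec's "variables" are interpreted by hand
# here (uniform "taux" per scenario, recursive "capital", derived "profit").
import random

def simulate(spec, seed=0):
    rng = random.Random(seed)
    scenarios, steps = spec["scenarios"], spec["steps"]
    # "taux": per-scenario uniform draw on [0.03, 0.07]
    taux = [rng.uniform(0.03, 0.07) for _ in range(scenarios)]
    finals = []
    for s in range(scenarios):
        capital = [1000.0]  # init value from the spec
        for t in range(1, steps + 1):
            # formula: capital[t-1] * (1 + taux)
            capital.append(capital[t - 1] * (1 + taux[s]))
        # formula: profit = capital[t] - capital[0], taken at the last step
        finals.append((capital[-1], capital[-1] - capital[0]))
    return {"last_mean": {
        "capital": sum(c for c, _ in finals) / scenarios,
        "profit": sum(p for _, p in finals) / scenarios,
    }}

out = simulate({"scenarios": 1000, "steps": 12})
# Each scenario's final capital lies between 1000 * 1.03**12 and 1000 * 1.07**12.
```

The point of the sketch is only that everything the engine needs is in the JSON itself; nothing about the model lives in the calling code.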
One logic, several entry points
Once the model is described, it can be run in different ways depending on context.
From the portal, you visualize it and launch it in a UI.
From Python, you can embed it in an analytical script.
From TypeScript, you can use it in a client-facing web product or an internal admin application.
Over HTTP, you can call it from any system that can send a request.
The snippets below are not several different implementations.
They show the same execution logic, exposed through different entry points.
Example: Python (SDK)
from subspacecomputing import BSCE

client = BSCE(api_key="be_live_...", team_id="optional-team-uuid")

spec = {
    "scenarios": 1000,
    "steps": 12,
    "variables": [
        {
            "name": "taux",
            "dist": "uniform",
            "params": {"min": 0.03, "max": 0.07},
            "per": "scenario",
        },
        {
            "name": "capital",
            "init": 1000.0,
            "formula": "capital[t-1] * (1 + taux)",
        },
        {
            "name": "profit",
            "init": 0.0,
            "formula": "capital[t] - capital[0]",
        },
    ],
}

result = client.simulate(spec)
print(result["last_mean"]["capital"], result["last_mean"]["profit"])

Example: TypeScript (SDK)
import { BSCE } from '@beausoft/subspace';

const client = new BSCE('be_live_...', undefined, 'optional-team-uuid');

const spec = {
  scenarios: 1000,
  steps: 12,
  variables: [
    {
      name: 'taux',
      dist: 'uniform',
      params: { min: 0.03, max: 0.07 },
      per: 'scenario',
    },
    {
      name: 'capital',
      init: 1000.0,
      formula: 'capital[t-1] * (1 + taux)',
    },
    {
      name: 'profit',
      init: 0.0,
      formula: 'capital[t] - capital[0]',
    },
  ],
};

const result = await client.simulate(spec);
console.log(result.last_mean.capital, result.last_mean.profit);

Example: HTTP API
curl -sS -X POST "https://api.subspacecomputing.com/simulate" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: be_live_..." \
  -H "X-Team-Id: 00000000-0000-4000-8000-0000000000b2" \
  -d '{
    "scenarios": 1000,
    "steps": 12,
    "variables": [
      {"name": "taux", "dist": "uniform", "params": {"min": 0.03, "max": 0.07}, "per": "scenario"},
      {"name": "capital", "init": 1000, "formula": "capital[t-1] * (1 + taux)"},
      {"name": "profit", "init": 0, "formula": "capital[t] - capital[0]"}
    ]
  }'

Whether you use the portal, the Python SDK, the TypeScript SDK, or the HTTP API, the essential point is the same:
you do not rewrite different logic for each entry point.
You run the same model through different interfaces.
Why this really matters
This is not only an architecture concern.
It is also very concrete for an organization.
When several use cases share one logic, it becomes easier to evolve the system without multiplying versions, parallel tweaks, and gaps between implementations.
Integrations stay cleaner, maintenance gets healthier, and the organization spends less time reconciling several variants of the same logic.
In other words, sharing one engine is not just a matter of technical elegance.
It is also a smarter way to keep analytical models alive.
Conclusion
An execution platform is not only a UI to launch a calculation.
It is shared execution infrastructure that lets one model be described, run, visualized, reused, and integrated without being rebuilt for every context.
That is exactly what the portal, the SP Model, and the Python, TypeScript, and API snippets show here: several ways to reach the same execution logic.
Available as a service.
That is precisely how Subspace fits in.