AI & Machine Learning
Nebula's predictable performance makes it excellent for numerical computing.
Why Nebula for AI/ML?
| Feature | Benefit |
|---|---|
| Deterministic Execution | No JIT pauses or GC spikes |
| Fast Math | NaN-boxed floats, optimized numeric ops |
| Clean Syntax | Express algorithms clearly |
| Low Overhead | Direct array access |
Mathematical Functions
Built-in Math
log(sqrt(16)) # 4
log(abs(-3.14)) # 3.14
log(2 ^ 10) # 1024
# Future: sin, cos, exp, log, etc.
Activation Functions
ReLU (Rectified Linear Unit):
fn relu(x) do
    if x > 0 do
        give x
    end
    give 0
end

fn relu_batch(data) do
    perm result = []
    each x in data do
        result = result + [relu(x)]
    end
    give result
end
log(relu_batch([-2, -1, 0, 1, 2]))
# [0, 0, 0, 1, 2]
Sigmoid:
fn sigmoid(x) do
    # 2.71828 approximates Euler's number e (exp is listed as a future built-in)
    give 1 / (1 + 2.71828 ^ (-x))
end
perm values = [-2, -1, 0, 1, 2]
each v in values do
    log(v, "->", sigmoid(v))
end
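# Output (approximate): -2 -> 0.119, -1 -> 0.269, 0 -> 0.5, 1 -> 0.731, 2 -> 0.881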
Softmax:
fn softmax(values) do
    perm exp_sum = 0
    each v in values do
        exp_sum = exp_sum + (2.71828 ^ v)
    end
    perm result = []
    each v in values do
        result = result + [(2.71828 ^ v) / exp_sum]
    end
    give result
end
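For example, the outputs always sum to 1 (values shown are approximate):
log(softmax([1, 2, 3]))
# approximately [0.09, 0.245, 0.665]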
Vector Operations
Dot Product
fn dot(v1, v2) do
    if len(v1) != len(v2) do
        give empty
    end
    perm sum = 0
    for i = 0, len(v1) - 1 do
        sum = sum + v1[i] * v2[i]
    end
    give sum
end
perm a = [1, 2, 3]
perm b = [4, 5, 6]
log(dot(a, b)) # 32
Vector Addition
fn vec_add(v1, v2) do
    perm result = []
    for i = 0, len(v1) - 1 do
        result = result + [v1[i] + v2[i]]
    end
    give result
end
log(vec_add([1, 2], [3, 4])) # [4, 6]
Scalar Multiplication
fn vec_scale(v, scalar) do
    perm result = []
    each x in v do
        result = result + [x * scalar]
    end
    give result
end
log(vec_scale([1, 2, 3], 2)) # [2, 4, 6]
Matrix Operations
Matrix Representation
# 2D array as matrix
perm matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
]
# Access element at row i, col j (zero-indexed)
log(matrix[1][2]) # 6
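The same len calls used by the routines below read the dimensions:
log(len(matrix))    # 3 (rows)
log(len(matrix[0])) # 3 (columns)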
Matrix Transpose
fn transpose(m) do
    perm rows = len(m)
    perm cols = len(m[0])
    perm result = []
    for j = 0, cols - 1 do
        perm row = []
        for i = 0, rows - 1 do
            row = row + [m[i][j]]
        end
        result = result + [row]
    end
    give result
end
perm m = [[1, 2], [3, 4], [5, 6]]
log(transpose(m)) # [[1, 3, 5], [2, 4, 6]]
Matrix Multiplication
fn matmul(a, b) do
    perm m = len(a)    # rows of a
    perm n = len(b[0]) # columns of b
    perm k = len(b)    # shared inner dimension
    perm result = []
    for i = 0, m - 1 do
        perm row = []
        for j = 0, n - 1 do
            perm sum = 0
            for p = 0, k - 1 do
                sum = sum + a[i][p] * b[p][j]
            end
            row = row + [sum]
        end
        result = result + [row]
    end
    give result
end
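A small 2x2 check:
perm x = [[1, 2], [3, 4]]
perm y = [[5, 6], [7, 8]]
log(matmul(x, y)) # [[19, 22], [43, 50]]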
Simple Neural Network Layer
fn dense_layer(input, weights, bias) do
    # input: vector [n]
    # weights: matrix [n x m]
    # bias: vector [m]
    # output: vector [m], with ReLU applied
    perm output = []
    perm m = len(weights[0])
    for j = 0, m - 1 do
        perm sum = bias[j]
        for i = 0, len(input) - 1 do
            sum = sum + input[i] * weights[i][j]
        end
        output = output + [relu(sum)]
    end
    give output
end
# Example: 3 inputs, 2 outputs
perm input = [1.0, 0.5, -0.3]
perm weights = [
    [0.1, 0.2],
    [0.3, 0.4],
    [0.5, 0.6]
]
perm bias = [0.1, 0.1]
log(dense_layer(input, weights, bias))
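# approximately [0.2, 0.32]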
Loss Functions
Mean Squared Error
fn mse(predicted, actual) do
    perm sum = 0
    for i = 0, len(predicted) - 1 do
        perm diff = predicted[i] - actual[i]
        sum = sum + diff * diff
    end
    give sum / len(predicted)
end
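A quick check with two predictions:
log(mse([2.0, 3.0], [1.0, 5.0])) # 2.5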
Cross-Entropy Loss
fn cross_entropy(predicted, actual) do
    # assumes a log_base(x, base) function; base e gives the natural log
    perm sum = 0
    for i = 0, len(predicted) - 1 do
        sum = sum - actual[i] * log_base(predicted[i], 2.71828)
    end
    give sum
end
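Assuming log_base is available (see the comment above), a one-hot example where only the true class's predicted probability contributes:
perm probs = [0.7, 0.2, 0.1]
perm labels = [1, 0, 0]
log(cross_entropy(probs, labels)) # approximately 0.357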
Future Roadmap
We're actively working on:
| Feature | Status | ETA |
|---|---|---|
| Tensor primitives | Planned | v1.1 |
| SIMD intrinsics | Planned | v1.2 |
| GPU bindings (CUDA) | Research | v2.0 |
| Autodiff | Research | v2.0 |
Performance Tips
- Use VM mode: nebula --vm script.na
- Preallocate arrays: avoid growing them inside loops
- Use for loops: faster than while for known ranges
- Minimize function calls in hot loops
- Cache repeated calculations:

# Slow:
for i = 0, len(data) - 1 do ... end

# Fast:
perm n = len(data)
for i = 0, n - 1 do ... end