Cutout Research Library

Word count:

Go to the bottom for links. For help to use this library, go to 2012041222322422.

A collection of ideas, "papers" and stuff (programs etc.) I don't know where else to put. I am the kind of person that is not supposed to do research but cannot stop, because it is so much fun. I have done this for some years now, trying to escape the mountains of notes by starting over with empty pages again and again. What you will find on these pages are videos, tools and techniques I hope you might find interesting. Be glad you do not need to look through all the work I have done.

- What is the point with these long numbers?
Have you noticed that the first four digits represent the year?
It is like a pattern yyyyMMddHHmmSStt:

yyyy: Year
MM: Month
dd: Day
HH: Hour
mm: Minute
SS: Second
tt: Time zone (24-t)

Such numbers are useful because they are practically unique for the use of one person. You can also reconstruct a sequence of events later if you feel like it. When I have such a number, I do not need to come up with a title for every piece I write. If you have a number and want to find the article, search for it using Ctrl+F (works in most browsers).
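Such a number can be sketched in javascript. The two-digit "tt" suffix is my guess at the "24-t" time zone encoding above (24 minus the UTC offset in whole hours), so treat that part as an assumption:

```javascript
// Sketch: generate a yyyyMMddHHmmSStt reference number.
// The "tt" suffix is assumed to be 24 minus the UTC offset in hours
// (the "24-t" note above); whole-hour time zones only.
function referenceNumber(date) {
  function pad(n) { return (n < 10 ? "0" : "") + n; }
  var tz = 24 + date.getTimezoneOffset() / 60;
  return "" + date.getFullYear() +
    pad(date.getMonth() + 1) +
    pad(date.getDate()) +
    pad(date.getHours()) +
    pad(date.getMinutes()) +
    pad(date.getSeconds()) +
    pad(tz);
}
```

For example, referenceNumber(new Date()) gives a fresh number you can paste in as a headline.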

I made a webpage that generates unique numbers such as those I use for headlines.
You can try it here

I wrote a piece of javascript that counts the number of words on this page.
You can copy the code by looking at the source of this page.
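I can't reproduce the page's script here, but a minimal word counter in the same spirit might look like this (a sketch, not the page's actual source):

```javascript
// Count words by splitting on whitespace and keeping non-empty pieces.
function countWords(text) {
  return text.split(/\s+/).filter(function (w) {
    return w.length > 0;
  }).length;
}
```

In a browser you could call countWords(document.body.innerText) and write the result after "Word count:".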

While I am still thinking about the philosophy of this library of research and ideas, I should inform you that this rabbit hole goes deep. Most of the urge to post so much stuff comes from feeling that I am onto something, but having few people to tell it to, because there are so many distractions and institutions competing for people's attention. However, I also do it for my own pleasure, since I want a place in the world where I fully control the content. I wish I could express everything I have to tell in one sentence, but instead I will take my time.

A hint to make peeking into this library a more comfortable activity: use the F11 button to make the page fullscreen. Since you are reading this, I suppose you like reading and researching and are curious about what kind of stuff I am interested in. Surely you will also have more pleasure if you find a quiet, warm and comfortable place, perhaps with some coffee or a drink, before you proceed. If your eyes get tired, rest them a few seconds by looking at the background image. It is a painting by the romantic painter and poet Carl Spitzweg, called "The Bookworm" (1850).

The original idea behind these numbers came from a problem I had when programming: linking information that is spread across multiple files. For example, while programming I write a log, a todo list or a diary. I use different methods and techniques for different problems, which is not easy to express consistently in one file. I like to write, and perhaps I will write some books one day, but for practical purposes you can't work like in a novel.

- If you can get rid of the restricting order in a problem, you increase the power of the solution by one order of magnitude.

Sorting and organizing information is one of the major challenges in information theory. The reason certain constants or formulas are popular is that they don't change even in complex situations.

A researcher was thinking about how to produce artificial leaves for harvesting solar energy as cheaply as possible. He figured out that almost nothing costs less than $10 per kilogram of weight to produce ($4.54 per pound). What a nice thing to know: if you know how much something weighs, you can guess how much it costs.

This is how I think when I make up special numbers to cross-reference documents. Knowing certain properties or constants allows me to take shortcuts. If I generate a new number each time, I can follow the traces like a graph where each number represents a connection. Now I don't need to think about the order anymore. I can move a text from one document to another, copy it, modify it and so on. I have tested it a bit, but it remains to be seen whether it works on a larger scale.

  1. If you want to try such numbers on your own, you can type date() into the Calculator

I have written some information about Boolean algebra (the mathematics of 0s and 1s). I felt this was necessary since my approach to it is slightly different from the resources I have found online. Besides, I want to explore the terms in the direction where I find them useful.

This was the first document connecting to itself using numbers, and at that time I had the idea of something I call "helpers". A helper is a piece of information, or a class in programming, that knows how to do a certain thing. Unlike a dictionary, which informs people about terms used in discussions and in general, a helper is more specific and straight to the point, written in simplified language.

  1. If you are interested in learning more about Boolean algebra, go to 2012040110183822.

A curious thing I had not thought of is that HAVOX symbols can be written like this:

H: ==
A: =>
V: <=
O: <>
X: ><
Y: -<
Y-1: >-
I: --

If you read it sideways, by bending your head to your right shoulder, you can see the lines form the first five letters.
A longer string can be written like this:

  1. If you want to learn more about what HAVOX is, go to 2012040916405823.

HAVOX symbols are a kind of binary sequence that represents comparisons of "similar" and "not similar". If a person A believes something that person B also believes, we say their HAVOX relation is 0. If the persons later start disagreeing, we can say the relation is 1. It is used to describe how the agreement between beliefs can change over time.

In HAVOX, the single bits are not as interesting as pairs and triples of bits. For example: the first bit can represent understanding of the current situation. The second bit can represent understanding of how the situation evolves. The third bit can represent predictions about the future.

This is the most general description of all situations in the universe. If you had enough information and intelligence, you could extract all information by having 2 HAVOX bits related to reality.

When I use HAVOX, I write down all 8 possible combinations of 3 bits and then eliminate the ones that are impossible. The ones that are left tell me which situations are possible. I can also use it to calculate a number I call the Nilsen/Occam's number, which tells how complex a hypothesis is.
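The elimination step can be sketched in javascript. The rule used here, that two neighbouring 1s force all bits to 1, is the one described for HAVOX further down this page; any other rule can be plugged in:

```javascript
// List all 8 combinations of 3 bits and keep those a rule allows.
function possibleCombinations(allowed) {
  var result = [];
  for (var i = 0; i < 8; i++) {
    var bits = [(i >> 2) & 1, (i >> 1) & 1, i & 1];
    if (allowed(bits)) result.push(bits.join(""));
  }
  return result;
}

// Example rule: if two neighbouring bits are both 1, all must be 1.
function havoxAllowed(bits) {
  var neighbours = (bits[0] && bits[1]) || (bits[1] && bits[2]);
  var all = bits[0] && bits[1] && bits[2];
  return !neighbours || !!all;
}
```

With this rule, possibleCombinations(havoxAllowed) leaves out "011" and "110".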

  1. If you want to learn more about HAVOX symbols, you can read more about it here.
  2. If you want to try out a multi-dimensional HAVOX editor, you can click here
  3. If you want to learn how HAVOX is connected to Boolean algebra, go to 2012041022480823.

Yesterday I figured out a rather interesting concept. When we have a dataset of what some persons like or don't like, we can extract possible "laws" from the dataset.

  1. If you are interested, you go to 2012041010563223.

One technique I have found very useful is a way to browse one or more persons' imagination, called "qubiting". The technique is very simple: ask a question that can be answered either yes or no, but do not answer it. Start very generally, then work down to the specific, for example:

Are there any aliens on Earth?
Is there any evidence of aliens on Earth?
Is there a way aliens could hide evidence about their identity?
Is there an alien spaceship on Earth?
Are there any places where an alien spaceship could land and take off without leaving any trace?
Are there any alliances between humans and aliens on Earth?
Do aliens force humans to cooperate?
Could an alien spaceship land in the middle of the ocean without leaving any traces?

This is like a binary search, only the set is the world and not the inside of a computer. When you leave the answers open, you will soon start to see patterns that connect the questions together. Qubiting is also very efficient because it takes away the criticism or dominance of one person. It requires some training, but when you get the hang of it, you will use it almost every day.

I am trying to make the practical notation I use in my calculator better, or at least to understand its use better. Mathematics has a tendency to subjectivize verbs, specifically for complex dimensions. Complex dimensions are just a set of rules for how a matrix (a table of numbers) is generated when numbers are multiplied. The secret to understanding this can be illustrated through one example, where you solve a problem and get two answers:

a + bi

You calculate it using another type of complex dimensions, and you get the answer:

a + bε

What did you get? You got two answers, or more specifically one answer with two numbers in it. Nobody cares about what kind of dimensions you have.

If you think of each row in a matrix as functions:

x = x0*vx + x1*vy
y = y0*vx + y1*vy

Then by putting in a complex dimension you can describe these two in one equation:

x + yi = ( x0*vx + x1*vy ) + ( y0*vx + y1*vy )i

Since i^2 = -1, we can move one number from the complex part to the real part. We can also move one number from the real part to the complex part by multiplying with i. If we have two types of functions, like cos and sin:

x = cos(α)*vx + cos(α)*vy
y = sin(α)*vx + sin(α)*vy

We can move things around until we get the answer we want (we want to describe rotation). We want to move sin(α) up and change its sign; sin(α) is in the i*i column and row. We also want to move cos(α) down, which is in the i*1 column and row.

x = cos(α)*vx - sin(α)*vy
y = sin(α)*vx + cos(α)*vy

This way we get two conditions:

i*1 = 1
i^2 = -1

We can just specify the rules we want, because complex dimensions are all about flipping the numbers around in the table. Therefore, in the notation I am working on, I don't treat anything as a complex dimension at all, but include it in the operator:

A **i B

This tells you that A is a matrix generated by a list of numbers that gives the same result as complex multiplication. The whole point is to be able to work with numbers only in lists; if you want extra dimensions, you need to tell the operators what to do. When another person reads the notation, he will be able to tell what kind of stuff is happening; he doesn't need to know what the numbers are thought to be. The advantage of everything being just numbers is that you can copy the thing you are interested in and use it directly in a calculator or programming language that supports this notation.
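As an illustration, a hypothetical "**i" operator on plain lists could be implemented like this (the name complexMul and the [re, im] list layout are my own choices, not part of the notation):

```javascript
// Multiply two 2-element lists as complex numbers: the i*i entry
// flips sign into the real part, the i*1 entries stay imaginary.
function complexMul(a, b) {
  return [
    a[0] * b[0] - a[1] * b[1], // real part
    a[0] * b[1] + a[1] * b[0]  // imaginary part
  ];
}
```

For example, complexMul([0, 1], [0, 1]) gives [-1, 0], which is i*i = -1 written as a list.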

I just got the idea of a new operator for inverting bitstreams:

a ! 0

This means to invert a at position 0.
This is handy when you want to invert a bitstream at a specific point.

Hi! I am a lookup tool for dealing with Boolean algebra. The number above me is my identity, but if you look closer you can also see the year, month, date and time. If somebody like me can't answer your question directly, I will give you a number which you can search for on this page using Ctrl+F. You can also click on the links, and your navigation history will be stored in the browser.

  1. If you are missing something here or find a mistake, copy the identification number and go to 2012040110260422.
  2. If you want to transform one expression to another, go to 2012040110301622.
  3. If you want to learn more about bitstream vectors, go to 2012040915170823.

Hi! This is the helper for missing items on this site. If you have noted down the number of the category you are missing, you can send the number to Sven Nilsen with an explanation of what you are looking for.

Hi! I am the helper for transforming one expression to another.

  1. If you need to learn basic rules, go to 2012040110340222.
  2. If you want to learn how to invert an expression, go to 2012040123394822

Hi! I am the helper for learning the basic rules of Boolean algebra.
There are 4 basic operations: OR, AND, NOT, EXCEPT.

  1. If you want to learn more about OR, go to 2012040110370022.
  2. If you want to learn more about AND, go to 2012040110425422.
  3. If you want to learn more about NOT, go to 2012040111011122.
  4. If you want to learn more about EXCEPT, go to 2012040111062422.
  5. If you want to learn about how to derive XOR using EXCEPT, go to 2012040123463922.

Hi! I am the helper for learning the operation OR. If we have two states A and B that can be either "on" or "off", then "A + B" is an OR operation that always returns "on" if either A or B is "on".

For example: A = chocolate and B = strawberry.
I want an ice cream where "A + B" means "either chocolate or strawberry, or both".

Hi! I am the helper for learning the operation AND. If we have two states A and B that can be either "on" or "off", then "A * B" is an AND operation that returns "on" only if both A and B are "on".

For example: A = blue eyes and B = long hair. There goes a person for whom "A * B" means "has both blue eyes and long hair".

Hi! I am the helper for learning the operation NOT. If we have one state A that can be either "on" or "off", then "!A" is a NOT operation that returns "on" if A is "off", and "off" if A is "on".

For example: A = strong arms.
I have fought a stranger for whom "!A" means "didn't have strong arms".

Hi! I am the helper for learning the operation EXCEPT. EXCEPT is written as subtraction, but follows some strange rules:

A + B - C = (A + B)*!C

For example: A doctor is going through with the procedure and treatment, except if the patient shows signs of recovering.

The exception state turns the whole expression "off" when it is "on".

  1. To read about EXCEPT and XOR, go to 2012040111122422.
  2. To read about EXCEPT and parentheses, go to 2012040111213622.
  3. To read about nested EXCEPT rules, go to 2012040123432622.

2012040111122422 2012040914504923
Hi! I am the helper for understanding the connection between EXCEPT and XOR. If we have two states A and B that can be "on" or "off", we can have two rules: one where B disrupts A, and one where A disrupts B:

A - B
B - A

If A and B are "on" at the same time, then neither of the rules is "on".
If we set up a truth table, it will look like this:

A B  A-B  B-A
0 0   0    0
0 1   0    1
1 0   1    0
1 1   0    0

We can make one rule out of these two:

(A - B) + (B - A) = A*!B + B*!A

This operation is called XOR.

  1. If you want to learn more about truth tables, go to 2012040914555723.

Hi! I am the helper for dealing with EXCEPT in parentheses.
When we have an expression like this:

A + (B - C)

We can't resolve the parentheses like you are used to in basic arithmetic! The minus sign means an exception, and the parentheses control the scope of the exception. To resolve the parentheses, you first need to transform into the alternative form:

A + (B - C) = A + B*!C

If there are two parentheses with the same exception, we can put the exception on the outside:

(E - F) + (G - F) = E + G - F

2012040123463922 2012040914470723
Hi! I am the helper for deriving the XOR rule using EXCEPT.
By drawing a Venn diagram, we know that we can write XOR as:

A + B - A*B

Using the rule for transforming a Boolean expression into its opposite, we can write:

!(A*B) = !A + !B

We can now insert this into the expression:

A + B - A*B
(A + B)*(!A + !B)
A*!A + B*!B + B*!A + A*!B

We know that A*!A and B*!B are always "off".
When we remove these terms we get:

B*!A + A*!B = (B - A) + (A - B)
(A - B) + (B - A) = A + B - A*B
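The derivation can be double-checked by brute force over all four combinations of A and B:

```javascript
// A - B is A*!B; check that (A - B) + (B - A) equals A + B - A*B
// for every combination, and that both equal XOR.
function except(a, b) { return a & (b ^ 1); }

for (var a = 0; a <= 1; a++) {
  for (var b = 0; b <= 1; b++) {
    var byExcept = except(a, b) | except(b, a); // (A - B) + (B - A)
    var byVenn = (a | b) & ((a & b) ^ 1);       // A + B - A*B
    if (byExcept !== byVenn) throw new Error("forms differ");
    if (byExcept !== (a ^ b)) throw new Error("not XOR");
  }
}
```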

Hi! I am the helper for inverting an expression.

!(A + B) = !A*!B
!(A * B) = !A + !B
!(A - B) = !(A*!B) = !A + B

It is easiest to think of '*' and '+' as opposites of each other.
Invert an expression by swapping them and inverting each term.
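The three rules can be verified over all combinations of A and B:

```javascript
// Check the inversion of OR, AND and EXCEPT by brute force.
function not(x) { return x ^ 1; }

for (var a = 0; a <= 1; a++) {
  for (var b = 0; b <= 1; b++) {
    if (not(a | b) !== (not(a) & not(b))) throw new Error("!(A + B)");
    if (not(a & b) !== (not(a) | not(b))) throw new Error("!(A * B)");
    if (not(a & not(b)) !== (not(a) | b)) throw new Error("!(A - B)");
  }
}
```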

Hi! I am the helper for nested EXCEPT rules. When there is an exception to the exception, it can be split into two rules.

A - (B - C)
A*(!B + C)
A*!B + A*C
(A - B) + A*C

Lacks reference
Hi! I am the helper for understanding the connection between EXCEPT and programming return commands. EXCEPT, or a negative Boolean expression, can be thought of as something that disrupts the algorithm.

function myMethod(a) {
  // This is a condition where this rule will fail.
  // Therefore, we need to return.
  if (a[0] == 0) return;
  // Do something.
}

If the rest of the algorithm was A and the condition for disruption was B, then we could just write:

A - B

Lacks reference
Hi! I am the helper for understanding how the XOR pattern in programming can make code easier to read. Write a function that calls all the functions, and put the condition inside each function instead of outside. This helps you organize the code in a way where you can see under what condition the algorithm operates.


function doA() {
  if (!aSelected) return;

  // Do the work.
}

If only one function does the work at a time, you can write them in any order.
This is called an XOR pattern of functions.

Lacks reference
Hi! I am the helper for solving equations using Boolean algebra. The best way to deal with equations is to write them in a form with '+' (OR) and '-' (EXCEPT):

A + B = C + D

We want to solve this for A:

A = C + D - B

Lacks reference
Hi! I am the helper for dependencies using Boolean algebra equations. When we solve an equation by writing a single term on one side, we can think of this as a function call that has its dependencies on the other side.

A = C + D - B

In normal programming languages, each line is executed one by one. We can use Boolean equations to define a way of executing that relies on conditions. If a state is 0, it is not executed or not yet done. If a state is 1, it is done.

B = 1
A = B

B is always executed, which means it counts as a starting point.
A is executed after B.

2012040915170823 2012041010414823
Hi! I am the helper that teaches you about bitstream vectors.
A bitstream vector starts at 0 and changes up and down, for example:

__--__--__

A more compressed way of writing this is a list of the positions where it changes: [2,4,6,8].

  1. If you want to learn how to perform a NOT operation on bitstreams, go to 2012040915215623.
  2. If you want to learn about connection between bitstream and numbers, go to 2012040915193523.
  3. If you want to learn about conservation of information in bitstream vectors, go to 2012040915205023.
  4. If you want to learn about doing nested XOR operations on bitstreams, go to 2012040915235923.

Hi! I am the helper for understanding the connection between bitstream vectors and numbers. The number 12 can be represented as a bitstream vector:

------------ = [0,12]

We can write |A| = 12.
A - B on bitstream vectors means something different than |A| - |B| on numbers.

___------------_____ A
_____------------___ B
___--_______________ A-B

------------ |A|
------------ |B|

|A|-|B| = 0

A number is a bitstream vector with all the 0 places removed, so that only the length |-| is compared.
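|A| can be computed directly from the change-position list by summing the lengths of the "on" runs (a sketch that assumes an even number of entries, so the stream ends "off"):

```javascript
// Sum (end - start) over each pair of change positions.
function measure(a) {
  var total = 0;
  for (var i = 0; i + 1 < a.length; i += 2) {
    total += a[i + 1] - a[i];
  }
  return total;
}
```

For example, measure([0, 12]) gives 12, matching |A| = 12 above.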

Hi! I am the helper for understanding conservation of information in bitstream vectors. What we mean by conservation of information is that all the numbers appear in the results in the same amount as in the inputs. When performing AND and OR on two bitstream vectors A and B, the total amount of information is constant if we look at both results.

[4,10] * [2,6] = [4,6]
[4,10] + [2,6] = [2,10]

When we are performing EXCEPT A-B and B-A the information is also conserved:

[4,10] * ![2,6] = [4,10] * [0,2,6] = [6,10]
![4,10] * [2,6] = [0,4,10] * [2,6] = [2,4]
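The AND and OR examples can be reproduced with a small sketch: expand the change-position lists into plain bit arrays, combine them bit by bit, and compress back (assumes non-empty, even-length vectors):

```javascript
// Expand a change-position list into an array of 0s and 1s.
function expand(v, length) {
  var bits = [], on = 0, j = 0;
  for (var i = 0; i < length; i++) {
    while (j < v.length && v[j] === i) { on ^= 1; j++; }
    bits.push(on);
  }
  return bits;
}

// Compress an array of 0s and 1s back into change positions.
function compress(bits) {
  var v = [], on = 0;
  for (var i = 0; i < bits.length; i++) {
    if (bits[i] !== on) { v.push(i); on = bits[i]; }
  }
  if (on === 1) v.push(bits.length);
  return v;
}

// Combine two bitstream vectors with a bitwise operation.
function combine(a, b, op) {
  var n = Math.max(a[a.length - 1], b[b.length - 1]) + 1;
  var ea = expand(a, n), eb = expand(b, n), out = [];
  for (var i = 0; i < n; i++) out.push(op(ea[i], eb[i]));
  return compress(out);
}

function bitAnd(x, y) { return x & y; }
function bitOr(x, y) { return x | y; }
```

With this, combine([4,10], [2,6], bitAnd) gives [4,6] and combine([4,10], [2,6], bitOr) gives [2,10], as in the equations above.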

Hi! I am the helper for performing the NOT operation on a bitstream vector. The bitstream has a number for each time it switches value, so we can invert it simply by putting a 0 at the beginning of it:

![2,4] = [0,2,4]

If there already is a 0 there, we can remove it:

![0,2,4] = [2,4]
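In code, this NOT is just toggling the presence of a leading 0:

```javascript
// NOT on a bitstream vector: add a 0 in front, or remove it if present.
function bitstreamNot(v) {
  if (v.length > 0 && v[0] === 0) return v.slice(1);
  return [0].concat(v);
}
```

Applying it twice gives back the original vector.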

Hi! I am the helper for deriving IMPLICATION using an equation. Let's say B is a command that depends on A being executed before it:

B = A

When we move B over to the other side, we get zero on the left side:

0 = A - B

0 in Boolean algebra is always false, so we need to invert both sides to get the true law:

!0 = !(A*!B)
1 = !A + B

If A is 1, then B has to be 1.
If A is 0, then B can be 0 or 1.
This is called IMPLICATION and is written like this:

A → B

You can also write it in 3 other forms:

(A → B) = (A ← B) = (!B → !A) = (!B ← !A)

When the arrow points to the left, we read it as "If B, then possibly A".
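The law 1 = !A + B, together with the contrapositive form !B → !A, can be checked over all four combinations:

```javascript
// IMPLICATION as !A + B; verify it equals the contrapositive !B -> !A.
function implies(a, b) { return (a ^ 1) | b; }

for (var a = 0; a <= 1; a++) {
  for (var b = 0; b <= 1; b++) {
    if (implies(a, b) !== implies(b ^ 1, a ^ 1)) {
      throw new Error("contrapositive differs");
    }
  }
}
```

The only combination where implies(a, b) is 0 is a = 1, b = 0.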

  1. If you want to learn about extracting IMPLICATION from datasets, go to 2012041010460623

Lacks reference
Hi! I am the helper for nested implications.

A → B → C
1 = !A + !B + C

Lacks reference
Hi! I am the helper for deriving nested implications:

A = B*C*D
0 = B*C*D - A
0 = B*C*D*!A
!0 = !(B*C*D*!A)
1 = !B + !C + !D + A

This describes only the last part.
We need one equation for each step to define the specific order:

1 = !B + C
1 = !B + !C + D
1 = !B + !C + !D + A

Lacks reference
Hi! I am the helper for deriving multiple EXCEPT rule in two ways:

A - B - C
A - (B + C)
A*!(B + C)

Another way:

A - B - C
A*!B - C

Lacks reference
Hi! I am the helper for understanding precedence in Boolean algebra. We can write the precedence as a list of operators where the first has the highest and the last the lowest:

[!, *, -, +]

Subtraction is special because it operates on the whole expression it belongs to.

Lacks reference
Hi! I am the helper for understanding the boundaries in Boolean algebra. When we got two numbers, A and B, but don't know the bitstream vectors, we got following law:

|a| - |b| <= |a - b| <= |a|

Lacks reference
Hi! I am the helper for understanding the symmetry of splitting and joining water bubbles. When we have two amounts, A and B, that can partly intersect but have the following relationship:

|a-b| = |b-a|

Then it is possible that a = b or that they are completely separate. They can also change in size, like two small bubbles that merge to form a larger one.

Lacks reference
Hi! I am the helper for describing the perfect crime. When we have two amounts, A (things you want others to believe) and B (lies), then the truth C is an XOR operation between A and B:

C = (A - B) + (B - A)

Since the crime was performed in the past and others only know its effects, the effects have to match what could have happened if your story were true:

|C| = |A| = |(A - B) + (B - A)|

In addition, if somebody performs a check D against the past, you want it to match the story:

|D*A| = |D*C| = |D*(A - B) + D*(B - A)|
D*(A - B) = D*A*!B = A*(D - B)
D*(B - A) = D*B*!A = B*(D - A)
D*A = A*(D - B) + B*(D - A)

If your story covers all the checks, then D - A = 0. If that is true, then D - B = D.

D*A = A*(D - B) + B*(D - A)
D*A = A*D + B*0 = A*D

Lacks reference
Hi! I am the helper for understanding equality. If we have two amounts, A and B, which have the relation:

A = B

Then we can reformulate it to be:

A - B = 0

What if B contains all the elements of A but has some extras? That would not violate the equation above, but we can also move A to the other side:

0 = B - A

This rules out the extras, so we know they have to be equal bit for bit.

Lacks reference
Hi! I am the helper for understanding why (-1)^2 = 0 in Boolean algebra. This is different from normal numbers, but it can be demonstrated using the following example:

A - 1 = 0
0 - 1 = (-1) = 0

Therefore, we can write the equation:

(-1)*(-1) = (-1)^2 = 0

Lacks reference
Hi! I am the helper for solving equations with absolute subtraction.

|A| - |B| = |A - B|

The equation above tells us something about the relationship between A and B. We can write the absolute value of the subtraction in complete form and use insertion:

|X - Y| = |X| - ( |Y| - |Y - X| )
|X - Y| = |X| - |Y| + |Y - X|
|A| - |B| = |A| - |B| + |B - A|
0 = |B - A|

What we found out is that all the elements in B are also in A.

Lacks reference
Hi! I am the helper for understanding bitstream division. A week is divided into 7 parts, and each part is called a "day". We can use bitstream division on all amounts that have the relation:

N*|A/N| = |A|

The amount |A/N| is thought of as "any of the N parts of A". For example:

7*|week/7| = 7*|any day| = |week|

The length of any day is one seventh of the length of a week.

Hi! I am the helper for calculating nested XOR operations. If you have a sequence of 0s and 1s, you can calculate the nested XOR of all these bits by adding all the 1s together and taking modulus 2:

|A|%2 = A0 XOR A1 XOR A2 XOR ...

This trick is especially useful when calculating with bitstreams, because you can't see the bits directly.


|[2,4,6,9]| = (4-2)+(9-6) = 2+3 = 5
5%2 = 1

The answer is then 1. If the length of the bitstream was 6, the answer would be 0.
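The whole trick fits in a few lines:

```javascript
// Nested XOR of a bitstream: measure the "on" runs, then take % 2.
function nestedXor(v) {
  var total = 0;
  for (var i = 0; i + 1 < v.length; i += 2) {
    total += v[i + 1] - v[i];
  }
  return total % 2;
}
```

For example, nestedXor([2, 4, 6, 9]) gives 1, as in the calculation above.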

  1. If you want to learn more about modulus, go to 2012040914334423.
  2. If you want to learn more about XOR, go to 2012040914504923.
  3. If you want to learn more about deriving XOR using EXCEPT, go to 2012040914470723.

Hi! I am the helper for understanding modulus. When you divide 10 by 3, you get 3.3333... In integer division we round down to the nearest whole number, so when a computer calculates with integers:

10 / 3 = 3

If we want to know how much of the number 10 is lost, we can use the modulus (%) operator.

10 % 3 = 1

If the programming language supports it, you can take modulus of float data types:

12.4 % 3 = 0.4
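In javascript there is no separate integer division, so Math.floor stands in for it; '%' works on floats too:

```javascript
// Integer division rounds down; modulus returns what is left over.
var quotient = Math.floor(10 / 3);  // 3
var remainder = 10 % 3;             // 1

// Modulus on a float: close to 0.4, up to floating point rounding.
var fractional = 12.4 % 3;
```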

2012040914555723 2012041010434223
Hi! I am the helper for understanding truth tables. A truth table shows all combinations of 0s and 1s by listing the binary representations of the numbers from 0 to 2^n - 1. To the right, one or more extra columns are added for functions you choose (in mathematics this is called "arbitrary"). For example, if you have 3 variables A, B and C, the table will look like this:

A B C  A - B - C
0 0 0  0
0 0 1  0
0 1 0  0
0 1 1  0
1 0 0  1
1 0 1  0
1 1 0  0
1 1 1  0

The last row is 2^3 - 1 = 8 - 1 = 7.

You can also write a truth table in a more compressed way, because if we assume the order in which the bits are written, we can just write the last column:

A - B - C = "____-___" = [4,5]

It says that the truth value starts at binary 4 (100) and ends at 5 (101). All other positions are 0. This is what I call a "bitspace", which is a bitstream vector over combinations.

  1. If you want to learn more about bitstream vectors, go to 2012040915170823.
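The compressed column can be computed by walking through the rows and recording where the function's value changes:

```javascript
// Build the bitspace (change positions of the last column) for a
// Boolean function over all rows of the truth table.
function bitspace(f, variables) {
  var v = [], on = 0, rows = 1 << variables;
  for (var i = 0; i < rows; i++) {
    var bits = [];
    for (var j = variables - 1; j >= 0; j--) bits.push((i >> j) & 1);
    var value = f(bits);
    if (value !== on) { v.push(i); on = value; }
  }
  if (on === 1) v.push(rows);
  return v;
}

// A - B - C is A*!B*!C.
function aExceptBC(bits) {
  return bits[0] & (bits[1] ^ 1) & (bits[2] ^ 1);
}
```

Here bitspace(aExceptBC, 3) gives [4,5], as above.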

2012041010460623 2012041010563223
Hi! I am the helper for learning about knowledge extraction from datasets. Here is what a fictional family, "Greenhound", likes in food:

woman + man + swedish + cheese + pizza + wine + beer - fish - shrimps - snakes - snails

There are both women and men in the family, so we need to specify this possibility. The properties with a negative sign are what nobody in the family likes.

Imagine a large set of such data; then you are challenged to find out whether a certain law is true. We can write a law as an expression A → B, read as "When A happens, B is true."

(A → B) = (A ← B) = (!B → !A) = (!B ← !A)

When the arrow points to the left, it means "possibly", so A ← B means "if B, then possibly A". We can never know that a law is valid forever; we can only find out that it is false by looking for exceptions.

The truth table returns 0 when the law is false, with 0 for a minus sign (-) and 1 for a plus sign (+):

A B  A → B
0 0  1
0 1  1
1 0  0
1 1  1

If one of them is missing, then jump over it and continue searching. Take a nested AND operation of all the rows, and you will find out whether the law is true for the dataset! Written in bitstream format:

A → B = "--_-" = [0,2,3,4]
  1. If you want to learn more about IMPLICATION, go to 2012041010371223.
  2. If you want to learn more about bitstream vectors, go to 2012041010414823.
  3. If you want to learn more about truth tables, go to 2012041010434223.
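Checking a law against such a dataset can be sketched like this. The encoding (1 for "+", 0 for "-", missing properties left out) is my own choice for the example:

```javascript
// A law A -> B fails only on a row where A is 1 and B is 0.
// Rows where either property is missing are skipped.
function lawHolds(rows, a, b) {
  for (var i = 0; i < rows.length; i++) {
    var A = rows[i][a], B = rows[i][b];
    if (A === undefined || B === undefined) continue; // jump over it
    if (A === 1 && B === 0) return false; // found an exception
  }
  return true;
}

// A toy dataset in the spirit of the "Greenhound" family above.
var families = [
  { cheese: 1, wine: 1, fish: 0 },
  { cheese: 1, wine: 1 },
  { cheese: 0, fish: 0 }
];
```

Here lawHolds(families, "cheese", "wine") is true, while lawHolds(families, "cheese", "fish") is false because of the first row.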

2012041017081123 2012041022480823
I am the helper for introducing HAVOX symbols and thinking. This is a special kind of reasoning for studying beliefs about objects or situations.

For example: "He stole my car!" is an expression that contains 4 claims:

He (could be she, it, they, ...)
Stole (could be crashed, borrowed, parked, cleaned, ...)
My (could be his, her, theirs, not mine, ...)
Car (could be bike, truck, train, plane, ...)

If we know that this situation applies to a car, we can eliminate that option, so we are left with 3 claims in "He stole my car!". You can also combine two claims together, for example "my car", "their truck", "my truck", "their car". The general idea is that every situation in the universe can be boiled down to 3 claims:

Input → Function → Output
My Car → Was Stolen → He Did It

We can make it longer, but for most practical applications 3 is enough. Each level can be assigned the value 0 if two beliefs are similar, or 1 if two beliefs are different.

The strange thing is that if two people believe the same Input and Function, they should come to the same conclusion. This also applies backwards, because if two people believe the same Function and Output, they should figure out the same Input. We can write this as a binary number where two neighbouring bits cannot both be 1, which rules out patterns like "011" or "110".
(You can also invert the bits, thinking of it like a distance, but I use that notation here for consistency with bitspace calculations.) If there are two neighbouring 1s, then all the bits should be 1. Another way of writing this is to look at the change from each level to the next; this way we don't have to worry about whether the bits are inverted or not:

00 = -
01 = <
10 = >
11 = =

000 == (H)
001 => (A)
100 <= (V)
101 <> (O)
010 >< (X)
011 >-
110 -<
111 --
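The table can be generated from a pair encoding read off the table itself (note that the bits here are in the inverted "distance" form mentioned earlier, so the pair codes differ from the non-inverted ones):

```javascript
// Build the symbol for a bit string from its neighbouring pairs,
// using the pair codes implied by the table above.
var PAIRS = { "00": "=", "01": ">", "10": "<", "11": "-" };

function havoxSymbol(bits) {
  var s = "";
  for (var i = 0; i + 1 < bits.length; i++) {
    s += PAIRS[bits.charAt(i) + bits.charAt(i + 1)];
  }
  return s;
}
```

For example, havoxSymbol("101") gives "<>", the O symbol.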

When two people disagree, it does not mean that one belief violates what is possible. Every combination in HAVOX is valid as an additional explanation.

  1. If you want to learn more about how HAVOX is used in practical situations, go to 2012041013135323.

Hi! I am the helper for learning to use HAVOX symbols in practical scenarios.

For example: you find out that cows in Denmark (input) spread a disease (function) so that they die (output), and there is a cow in England that got ill for mysterious reasons. You write down the different scenarios you think are plausible for each level.

Input = {An English cow, A transported cow from Denmark to England}
Function = {Got the same disease, Got another disease}
Output = {The cow will die, The cow will not die}

Then you ask some people what they believe:

Facts (what is possible): (English cow + Danish cow) → Got the same disease → The cow will die

John: A transported cow from Denmark to England → Got the same disease → The cow will die
Matt: An English cow → Got another disease → The cow will not die
Luke: An English cow → Got the same disease → The cow will not die

John vs Facts = --
Matt vs Facts = <=
Luke vs Facts = -<

Notice that we don't know the actual situation; the only facts we have are the laws of what is possible. Luke believes something that is inconsistent with the facts; you can see that because he has two zeroes.

Lacks reference
Hi! I am the helper for correcting a belief against a more certain rule. Usually, when you test against a situation, you can calculate 0 or 1 for each level and compare those between the rules. When you don't have situations to test against, you can still update beliefs by doing a trick. If the intersection of two beliefs, compared to one of them, is the same or 0 in 2 following layers but not in all, then one of the beliefs must be wrong! This means we can just smash two beliefs together and see if they are consistent. For example, if we believe A → E → F, we can learn from another belief by taking AND with each term:

A → E → F vs (A + C - D) → E → (F + G)

(A*(A + C - D)) → (E*E) → (F*(F + G))
(A*(A + C)*!D) → E → F
(A - D) → E → F

When we compare the new belief with the old one, we get an invalid HAVOX >-. This means the old belief and the other one cannot both be true. Another example:

H → F → H vs (A + C - D) → E → (F + G)

(H*(A + C)*!D) → (F*E) → (H*(F + G))
(H*A + H*C - D) → (F*E) → (H*F + H*G)

We get H (==) compared to the old belief, and therefore they can both be true. If, for example, F is "physics" and E is "astrology", then F*E = 0 and a special law applies, called "HAVOX collapse". This is because "physics" and "astrology" cannot both be true under any circumstances.

  1. If you want to learn more about HAVOX collapse, go to 2012041017114723.

Hi! I am the helper for learning HAVOX collapse. We can write a 3 layer HAVOX as an equation of nested IMPLICATION:

A → B → C
1 = !A + !B + C

In our case, B is always 0:

B = 0
1 = !A + 1 + C

Since we have 1 on both sides of the equation, the values of A and C do not matter. This is like pulling the plug out of HAVOX. Two beliefs that seem plausible and compatible for a long time can turn out such that one of them is completely wrong. This is because beliefs are, after all, just beliefs X, and the truth with a big T might turn out such that T*X = 0.

Some people use a technique where, when a belief collapses, they jump over to another one and hold on to it until that collapses too. There are so many possible beliefs that you could call this "swimming in the HAVOX ocean". The point of HAVOX is to find ways to correct or test beliefs and consider all the other possibilities, not to stick to a single one without evidence.
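The layer-by-layer AND and the collapse test can be sketched in Python. This is a hedged illustration, not the author's notation: a belief is a list of layers, each layer a Boolean function of an assignment, and "HAVOX collapse" means some combined layer is 0 for every possible assignment. The variables follow the example A → E → F vs (A + C - D) → E → (F + G).

```python
from itertools import product

# Variables appearing in the two beliefs of the example above.
VARS = ["A", "C", "D", "E", "F", "G"]

belief1 = [lambda v: v["A"], lambda v: v["E"], lambda v: v["F"]]
belief2 = [lambda v: (v["A"] or v["C"]) and not v["D"],
           lambda v: v["E"],
           lambda v: v["F"] or v["G"]]

def combine(b1, b2):
    # Smash two beliefs together: AND them layer by layer.
    return [lambda v, f=f, g=g: f(v) and g(v) for f, g in zip(b1, b2)]

def collapsed(belief):
    # True if some layer is identically 0, like F*E = 0 for
    # incompatible terms such as "physics" and "astrology".
    assignments = [dict(zip(VARS, bits))
                   for bits in product([False, True], repeat=len(VARS))]
    return any(not any(layer(v) for v in assignments) for layer in belief)

print(collapsed(combine(belief1, belief2)))  # False: every layer stays satisfiable
impossible = [lambda v: v["F"] and not v["F"]]  # a layer that is always 0
print(collapsed(impossible))                 # True: HAVOX collapse
```

The brute-force enumeration is only feasible for a handful of variables, but it makes the collapse condition concrete.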

Lacks reference
Hi! I am the helper for teaching Nilsen/Occam's Number, or "non". We have two beliefs and compare them with HAVOX:


We want to calculate the complexity of these beliefs, and the only thing we need is the number of layers. Since there are 5 of them, we can use the following formula:

fib( 5 + 2 ) = 13 non

This tells us how many types of consistent beliefs we could make at this level of complexity in the worst case. The efficiency of Occam's Razor is the difference from one complexity level to another. Since each Fibonacci number is the sum of the two preceding terms, we can write the increase in complexity by one level as:

fib( x + 2 ) - fib( x + 1 )
fib( x + 1 ) + fib( x ) - fib( x + 1 )
fib( x )

For example, x = 8 has 21 "non" more than x = 7.
The ratio of the increase in "non" to the total in the previous layer approaches the golden ratio conjugate 0.6180339887498949. This means that each level is approximately a gamble of hitting a target with 2-φ, or 38.2%, accuracy, in a row.

The "non" number is a two-edged sword: it estimates both the number of possible compatible beliefs and the number of possible contradicting beliefs if one collapses all the others. Finding out which one is collapsing the others will save you a lot of work. This can be done by a computer, extracting knowledge from datasets with a fairly simple algorithm.
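The "non" calculation can be written as a few lines of Python, using the convention fib(1) = fib(2) = 1 so that fib(5 + 2) = 13 as in the example above:

```python
def fib(n):
    # Iterative Fibonacci with fib(1) = fib(2) = 1.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def non(layers):
    # Nilsen/Occam's Number: fib(layers + 2).
    return fib(layers + 2)

print(non(5))           # 13
print(non(8) - non(7))  # 21, i.e. fib(8): the increase of one level
```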

  1. If you want to learn more about HAVOX, go to 2012041017081123.

Lacks reference
Hi! I am the helper for teaching the identification of HAVOX beliefs.
We have a set of beliefs:

X = {X0, X1, X2, ...}

Each person can have their own belief, but two or more persons can share beliefs within some limits. The problem is that a belief is not a physical object; it is like imagining the "spirit" of a team or a group of people. What makes a belief unique?

When we perform comparisons on beliefs (from now called objects), we can assign each comparison test a value in a set C.

C = {C0, C1, C2, ...}

Each comparison is static: they all take two arguments, C(A,B), and return 1 if A and B differ and 0 if they are equal (with this convention the XOR trick below works out). The result we get from the comparison test can be thought of as a relation between A and B. Since we can perform multiple tests between A and B, we can use multiple bits, one for each test. Here are all possible relations with 4 tests:

0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111

Let's say we know the relation between (A,B) and (B,C). To find the relation between (A,C) we take the XOR operation:

(A,B) = 0101
(B,C) = 1110
(A,C) = (A,B) XOR (B,C)
0101 XOR 1110 = 1011

Look at the first bit: 0 XOR 1 = 1. It is logical that if A is similar to B in one respect, but B differs from C, then A and C have to be different. No matter how many tests or objects we have, we know we can use the XOR operator to navigate from one to another. If C has a neighbor D, we can find (A,D) because we know (A,C), and so on. If two objects C and D have the same relation to A, then C and D are not unique with respect to A.
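The relation arithmetic is a one-liner; a small sketch, with the convention that bit i is 1 when the two objects differ on test i:

```python
def compose(r_ab, r_bc):
    # The relation (A,C) follows from (A,B) and (B,C) by bitwise XOR.
    return r_ab ^ r_bc

r_ab = 0b0101
r_bc = 0b1110
r_ac = compose(r_ab, r_bc)
print(format(r_ac, "04b"))  # 1011
```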

Lacks reference
Hi! I am the helper for learning how error correction works. Here we have a table of 0s and 1s in pairs, and a corresponding checksum using nested XOR operations:


When transmitting this message from one computer to another, we can check for errors by recalculating the checksums. If we get the wrong checksum in a row and a column, we know where the error is:


Not only this, but since the XOR is performed independently on the bits in each cell, we know the specific bit:


You can perform this on larger tables and on larger chunks of data, for example 32 bits in each cell.
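The row-and-column checksum idea can be sketched in Python for cells of a few bits each; the table values below are made up for the example, since the original tables are not reproduced here:

```python
from functools import reduce

def checksums(table):
    # XOR checksum for each row and each column of a table of integer cells.
    rows = [reduce(lambda a, b: a ^ b, row) for row in table]
    cols = [reduce(lambda a, b: a ^ b, col) for col in zip(*table)]
    return rows, cols

def locate_error(received, rows, cols):
    # Recompute checksums; the mismatching row and column pinpoint the bad
    # cell, and XOR of old and new checksum gives the specific flipped bits.
    new_rows, new_cols = checksums(received)
    r = next(i for i, (a, b) in enumerate(zip(rows, new_rows)) if a != b)
    c = next(j for j, (a, b) in enumerate(zip(cols, new_cols)) if a != b)
    return r, c, rows[r] ^ new_rows[r]

table = [[0b10, 0b01], [0b11, 0b00]]
rows, cols = checksums(table)
received = [[0b10, 0b01], [0b11, 0b01]]  # one cell corrupted in transit
print(locate_error(received, rows, cols))  # (1, 1, 1): row 1, column 1, bit 0
```

The same scheme scales to larger tables and larger cells, for example 32 bits per cell, as noted above.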

Lacks reference
Hi! I am the helper for doing AND operations on advanced expressions. The general rule is that you can move the negative terms outside the parentheses:

(A - B)(C - D) = (A)(C) - B - D = A*C - B - D
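You can check the rule by brute force over all 16 assignments, reading X - Y as X AND NOT Y:

```python
from itertools import product

# Verify (A - B)*(C - D) = A*C - B - D over every assignment.
for a, b, c, d in product([0, 1], repeat=4):
    lhs = (a and not b) and (c and not d)
    rhs = (a and c) and not b and not d
    assert bool(lhs) == bool(rhs)
print("identity holds")
```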

- What is the best way to read this page?
Hit F11 to get fullscreen, get something to drink (water is better than nothing) and something to note on if you get ideas.

- What is a "helper", do they do anything? If an article starts with "Hi! I am the helper ..." then it is intended to explain something. For practical reasons, helpers have one number for each reference. Since the numbers of helpers are coupled, one for the title and one for the link, it is a way to see how an article is referred to and from where. I use this technique because, as the library grows, I forget what is in it. Helpers that are written but not yet referred to have the text "Lacks reference" in bold.

- What about those not-helpers, what are they? If it is not a helper, then it is an idea, a note, a small project or an experiment. - Why is everything bunched together on one page using a number? So you can find it again using "". It was designed for a practical purpose: to store and retrieve information. You can also save the whole page to your local disc to analyze the library.

- How do I get back to the previous article I read?
You can use the back button to get back to the previous thing you read. Use it. Now.

I experimented with the layout of this page. I figured out that you can add shadow to text, which is cool. I added some shadow in a style I think looks a bit like an old typewriter, or a book that has been found in water. Most of the fun in life is making the world around you look more and more like your imagination.

I have tested a space filling curve called the "Hilbert curve". It is generated by a recursive algorithm. One very nice property is that it preserves grouping. If you have a bunch of numbers and color the places along this line, you will get a kind of map full of squares. If you place objects around in a map like that, you can later get them out with the order approximately preserved.

  1. Click here to see a javascript demo.
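The demo itself is a javascript page; the distance-to-coordinate mapping behind it can be sketched with the classic iterative bit-twiddling construction (a standard algorithm, not the author's code):

```python
def d2xy(order, d):
    # Map distance d along a Hilbert curve covering a 2^order x 2^order grid
    # to an (x, y) cell, building the position quadrant by quadrant.
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

print([d2xy(1, d) for d in range(4)])  # [(0, 0), (0, 1), (1, 1), (1, 0)]
```

Walking d from 0 upward visits neighboring cells, which is exactly the grouping-preserving behavior described above.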

I was reading some documentation on a website and thought of all the strange words that the author used. Then I got the idea: what if I filter out the words that are common or neutral? Can I make a list, in chronological order, of when words that are not common first appear? This could be a quick way to measure how specific an article or a webpage is.

I decided to use javascript and make a list of words I recognize from some articles. What surprises me is the massive amount of words we use for expressing things.

Another thing that I noticed: when people write, they often use opposite terms. If the word "writer" occurs, then the next thing is likely to be "reader".

It can also be used as a tool for people that are learning a new language.

When it comes to the name, I just took the first that came to my mind, so I called it "Word Counter".

  1. Click here if you want to visit the word counter.

In programming, it is often hard to read the code to find out which methods are called where. I made a javascript webpage where you can paste in the code, set a start line number and get out a list of the calls. It ignores constructors and function calls with a "." in front of them. The reason is that I usually know I am in the right file, so methods on other classes are unnecessary to know.

I got the idea after making the tool that counts words. Sometimes we are just overwhelmed by new information and need a quick way to find things. There are plenty of possibilities using regular expressions. One nice thing is that with a webpage, the tool is accessible everywhere.

Another thing I discovered was my first practical application of Boolean equations. In regular expressions, it is hard to write certain matches, especially if you want Boolean subtraction. What I did was to write 3 expressions instead of 1. The first is the most general one, and then I subtracted the specific matches I didn't want, to get only functions from the same class.

S = A - B - C

If I want to include one of them later using a checkbox, I can just ignore that match. This type of thinking, starting with a general set and then taking out the things you don't want, is very powerful. I just named it "Call Methods" until I get a better name or want to make a new version of it.
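The S = A - B - C idea can be sketched with sets of regex matches. The patterns and the sample code string here are made-up illustrations, not the actual tool:

```python
import re

code = "run(); stop(); obj.run(); Thing();"

A = set(re.findall(r"[\w.]+(?=\()", code))   # the general set: every call
B = {m for m in A if "." in m}               # calls on other objects
C = {m for m in A if m[0].isupper()}         # constructor-style calls

S = A - B - C                                # S = A - B - C
print(sorted(S))  # ['run', 'stop']
```

Including one of the subtracted sets again, as with the checkbox, is just leaving B or C out of the subtraction.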

  1. Click here if you want to try the "Call Methods".

I have an idea which I am wondering about.
If A and B are amounts then implication is written like:

A → B

Can we write any function like this?
If a function f is an operator on a set, such that:

f*A = B

Then I can overload the operator "*" to make it possible to write like this in programming languages. By the way, I have made it possible to write Boolean algebra on bitstreams directly in C#. The function has to be written before the operand in order to call the right operator. I did an experiment and it worked.

The function f can lookup the objects that are tied to the bitstream and do filtering. For example, you can find all humans in a group that are taller than a height and so on.

This is a new way to program that looks a lot more like mathematics than computer instructions.
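The same idea can be sketched in Python by overloading "*", with the function written before the operand; the names and data here are hypothetical:

```python
class Fn:
    # Wraps a predicate so that f * A filters the group A.
    def __init__(self, pred):
        self.pred = pred
    def __mul__(self, group):
        # f * A looks at the objects in the group and keeps the matches.
        return [x for x in group if self.pred(x)]

taller_than_170 = Fn(lambda person: person["height"] > 170)
people = [{"name": "Ada", "height": 180}, {"name": "Bo", "height": 160}]
print(taller_than_170 * people)  # [{'name': 'Ada', 'height': 180}]
```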

I am thinking about the implementation of functions in Boolean algebra. One challenge is to write the functions in a way they don't need to be rewritten for any type of object. For example, a human got height, so how do we access the height?

One possible way is reflections, which is a way to access properties at runtime by knowing the name of the property. Another way is to use SortedList as a kind of general replacement for a normal object.

I have done some experiments with SortedList, for example adding Boolean properties automatically to objects. It works, so I thought maybe I can develop a sort of database where one can add and remove properties on any object. I was a bit skeptical about writing functions operating on SortedLists, but perhaps the new notion of Boolean functions will solve this.

I have now done some more work on SortedList, and the results are good. The C# compiler evaluates multiplication from left to right, so when you have two functions f and g:

A*f*g = (A*f)*g

Made some more tests, now I think I have proper names, I think I will call the whole thing "Groups":

  1. Property - This class represents a bitstream, which again represents a group. It has no knowledge of the size of the entire group or which data set it is associated with. It is the purest thing you can get to what a "Set" would mean without including the data. Still, it is very powerful, since you can combine Properties to make new ones. This is done without moving the original data at all, and the bitstream calculations are very fast. You can use Boolean algebra on it, even in C#.
  2. PropertyList - This class is based on a generic List, but ties Property to the generic type. The properties are treated like extra layers, so you can not make Boolean functions, only use Boolean algebra. This class gives you an array of objects for a given property, so you can think of it as a List + Boolean algebra.
  3. Member - This class is based on a generic SortedList, but with int as key.
    The point with this class is to use the same property ids to access the data within the members.
    It is still a generic class; you need to define which type of members to use.
    It also makes it possible to overload * operators and create Boolean functions.
    Use it together with Groups or MemberPropertyList.
  4. MemberPropertyList - This class is based on PropertyList, but limited to classes of Member type. It adds a lot of functionality; one thing is that you don't need to think of the properties as extra layers. You can extract a list of values, like a column in a spreadsheet. This feature, combined with indices of the same list, makes it possible to create Boolean functions.
  5. Groups - This class is based on MemberPropertyList, but general to all objects. The functions and the rest of the classes will be built around this one.

One thing that surprises me is how many layers of features you need to make it practical. The bitstreams are OK; they are essentially just a way to calculate stuff. The challenge is that when you want to tie bitstreams to objects, you need to integrate it with the programming environment.

Groups is now turning into a very efficient command line tool. A technique that surprised me positively is stacking commands downward in a text box. The commands for retrieving data can feed the next command. If there is an error, then you can just do something to fix it.

For example, if I had a program like this for moving a file from one folder to another, it could look like this:

get <file>
mov <folder>

You write it forward, but it executes backward until it reaches a get command. The actual data it executes on is the last received, because after the mov command is executed the result changes. The above program is transformed into:

get <file>

The result shows if the file was successfully moved or not.

So actually, the program first runs forward to collect the data, then backward to execute commands. This looks quite like a programming language!

One thing is to use the "Groups" syntax as an interface, but can it actually be executed as a program, by a compiler? I can try to make 4 basic principles:

  1. When there is no data and the next command requires one, it jumps one line up.
  2. When a command is executed successfully, it jumps one line down.
  3. When a command fails, it stops and displays the error to the user.
  4. The user can correct the error and let the program continue.
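The jumping-up-for-data behavior can be sketched as a tiny interpreter in Python. This is a hedged illustration of the principles, and the get/mov commands below only simulate their effects:

```python
# A command that lacks data jumps one line up for it (principle 1);
# a get supplies data, and results then flow back down (principle 2).
commands = {
    "get": lambda arg, data: arg,                       # pretend to fetch <file>
    "mov": lambda arg, data: f"moved {data} to {arg}",  # pretend to move it
}

def run(program):
    lines = [line.split(None, 1) for line in program.strip().splitlines()]

    def execute(i, data=None):
        cmd, arg = lines[i]
        if cmd != "get" and data is None:
            data = execute(i - 1)        # jump one line up to collect data
        return commands[cmd](arg, data)  # on success, the result flows down

    return execute(len(lines) - 1)       # execution starts at the last line

print(run("get notes.txt\nmov backup"))  # moved notes.txt to backup
```

Error handling (principles 3 and 4) would amount to catching an exception in execute, showing it to the user, and resuming from the same line.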

In this programming language I want to get some data and perform some operations on it. There are no if or loop conditions here; the data themselves tell when I am done. If the data are not changed, I can set a flag on them so I do not do the same operation twice. Properties inside tags can be keywords or functions of some kind.

get (<all> - done)
// Do single object task.
// Mark when done.
mod done:yes

I think I will develop it first as a user tool but keep this possibility in mind.

This is fun. After some further testing with Boolean algebra on bitstreams, of which a subdivision is a technique I named simply "Groups", I find it more flexible and powerful than any query language I have ever heard about. It is not feature rich yet, but it can be adapted to any kind of functionality.

For example, when I write a 2D physics experiment, I might want to get the closest point to the mouse. The problem is, when writing a routine in C#, I have to prepare the arguments if there is a specific group of points. When I write a Property for Groups instead, I can easily filter out or add points to the argument on the fly. I can also invert the property, so I get all points that are not the closest to a certain point, and so on.

The applications are numerous, and the code looks very clean with just multiplication signs instead of calling functions with lots of arguments. C# is better to use when you create temporary data, but this again can be implemented as Properties. I think that a powerful object-oriented language and Groups will fit together.

Maybe Groups is part of a true 4th generation programming language, close to human natural language. It has abandoned objects, because objects are a kind of putting data into boxes, which makes it hard to move them around. This was not done on purpose; it just gets more powerful when you remove that limitation. Also, instead of defining 40 fields that one object might or might not have, you define Properties universally. It is expected that one object can have 1 property and another can have hundreds, and still both can be in the same group.

A property can be an expression of a more advanced composition, or it can simply contain specifically selected objects. Compared to classes, it is more powerful since it can be changed dynamically. Today, the compiler groups the objects by class in memory and this is very static. The purpose with Groups is not to only allow dynamically grouping, but to make it really fast.

In Robin Hood's Nottingham, there are 3 groups of people:

  1. One group of dumb and poor people
  2. One group of good and smart people
  3. One group of liars = The Sheriff + King John

Each time two of them trade, there is the same opportunity to make a profit. For example, when dumb people trade with smart people, the chance that the dumb get the profit is 25%. When smart people trade with liars, neither of them gets a profit since the lie is exposed, so that gives them 0% each. We make a table of all the possible trades between the groups:


In this town, the economy is unbalanced, because the dumb people will spend more than they earn. The only way they can survive is by charity from the smart people, but eventually all the money will end up in the liars' pockets. Robin Hood wants |Income| - |Cost| to be the same for all groups. But Robin Hood can not print fake money; he has to steal from the rich and give to the poor. What ideal income and cost should he pick as a measure of his success in sharing the wealth of Nottingham?

Robin Hood discovers a secret: he can use the sum of the diagonal as the ideal income and cost.
Quickly he sets up a table with a fourth column to include himself, but he is not restricted to the ideal income and cost:

Nottingham | Dumb | Smart | Liar | Robin Hood | Income
Robin Hood | -75  | 75    | 100  | 0          | 100

"Hmm, there is something wrong here... I can't earn -75 from the poor!" So he moves that number to the income of the dumb. "Hmm, I don't have to take 75 from the smart ones and later give 25 back..." So he subtracts 25 from 75 and gets 50.

Nottingham | Dumb | Smart | Liar | Robin Hood | Income
Robin Hood | 0    | 50    | 100  | 0          | 150

As long as he takes 50 from the smart and 100 from the liars, he will keep the economy in balance.

I want to make an algorithm that thinks the same way as Robin Hood:

  1. Use the diagonal as the ideal income and cost
  2. Put the difference from the ideal income and cost in an extra column and row
  3. For connected cells, subtract the smaller sum from both cells
  4. Use the result as the necessary adjustment to keep the economy in balance

I like to think of this as a "Robin Hood" balance algorithm because it describes what Robin Hood would have done.

Lacks reference
I think there is an error in this article since A*B+B = B. Hi! I am the helper for understanding how you can calculate the number of possible expressions in Boolean algebra. If we have 3 properties, A, B and C, then we can write a general term as:

A*B*C + A*B + B*C + C*A + A + B + C

This corresponds to a binary code:

111 + 110 + 011 + 101 + 100 + 010 + 001

Notice that there is 1 of threes, 3 of twos and 3 of ones.
Perhaps you recognize this pattern from Pascal's triangle:


The last number is omitted, because if every term is removed, we get 000.
Each of the "ones" can be inverted or not, like A or !A.
We get 2^b combinations for b "ones", plus one for removing the term,
which becomes 2^b + 1, for one single term only.

The number of terms for b "ones" is given by the binomial function, which looks up numbers in Pascal's triangle. For my calculator, I made a function bin(x) that returns a row in the Pascal triangle. Now I can calculate the number of possible expressions for x properties:


The first 4 numbers in this sequence are:

3, 75, 273375, 83224657051875

With only 4 properties, we can make 83.2 trillion expressions.
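The calculator formula is not reproduced here, but the four listed values are matched exactly by a product of (2^b + 1) factors with exponents read from a row of Pascal's triangle. Treat this as a reconstruction from the sequence rather than the author's own derivation:

```python
from math import comb

def count_expressions(x):
    # Product of (2^b + 1) factors with binomial exponents;
    # reproduces the sequence 3, 75, 273375, 83224657051875.
    n = 1
    for b in range(1, x + 1):
        n *= (2**b + 1) ** comb(x, b - 1)
    return n

print([count_expressions(x) for x in range(1, 5)])
# [3, 75, 273375, 83224657051875]
```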

By taking the logarithm, you can calculate higher numbers. For 73 properties, the number of possible expressions or groups has 101 066 180 471 412 023 200 000 000 digits. This is over a trillion trillion digits.

Each of these groups can be unique, if there exist data to support it. Because there is such a large number of combinations, the data has to be equally large to return unique results. For most practical human scenarios, there are few data sets that give guaranteed unique results per expression for more than 4 properties.

I am now working on software I call "Researcher" that relies on Boolean algebra for data requests. The software is not yet available to the public since it is in an early development stage. Today I added a feature for sending emails to recipients with a "Mail" property:

get Mail
mail <title>, <message>

When you hit Enter after "do", it will pop up a window that asks you to configure the mail username, password, etc. At the moment I show default settings for Gmail accounts.

Researcher is all about breaking loose from simple lists. You can tag all the data with as many properties as you want. Later, when you want to use the data, you can for example send email to all your friends with one leg, and invite them to a one-leg party:

get Mail*OneLeg
mail "One Leg Party!", "This Saturd|...

Or, you can invite those with one leg or one eye:

get Mail*(OneLeg+OneEye)
mail "Pirate Veteran Party!", "This Halloween, we are going to scare some ki|...

Researcher is designed around the philosophy "Get the data to the user, NOW!".

A binary sequence can of course represent a number, but it can also represent a type of expression:


Each term itself is the start of a new binary sequence, because each term can be inverted, like !A vs A. If you restrict the possible expressions to the AND form, where each property is multiplied, you can calculate the possible expressions. We use the trick that A*B and A*C are binary systems with the same number of bits. This makes it possible to multiply with the binomial function.


There is a much easier way to write this:


You can try it out in the calculator.

The problem is that expressions are not restricted to AND form...

This leads us to something called "interference". You can think of this as when two types of expressions, like "A + B*C", are written together in OR form; then there is a corresponding type of interference, which can be written like a table:


The type of interference is given by the AND operations between the two types of expressions. I have not yet figured out the exact formula, but at least it makes sense to try to understand such a table. It is kind of confusing because we have more than one binary system:

  1. The binary system that makes up types of expressions in AND form.
  2. The binary system for inverted or not-inverted for each type of expression.
  3. The binary system for interference between two types of expressions.

Since there is a binary system for inverted and not-inverted in a type of expression, there has to be yet another binary system for the interference. This means the type of interference has a binary system that will give us the interference. When calculating the total amount of possible expressions, we need to make sure that each type of interference is included. Still, we cannot count interference of one type of expression with itself, only with other types of expressions.

Interference between other terms, in 3 bits:


And here for 4 bits:


I think I need to study closer the connection between the binomial function and intersections.

I am working on a 2D game engine that takes advantage of Elemento graphics and the Boolean algebra of Groups. It uses OpenGL to draw the buffer to the screen, just like Stickman and Elemento. I have used the Stickman scene as a basis for games before, but this time I'll use Groups instead. Here is a screenshot of some eyes following the cursor:

Most of the work took place in Elemento, because I could use the "MousePos" expression function. This is all the code it took:

The IDE I use is SharpDevelop, but the Game engine will be usable from Visual Studio and C# Express. There are 6 libraries to add as references to your project in your IDE:

AntiGrain.NET.dll, ElementoLib.dll, GameLib.dll, GroupsLib.dll, OpenTK.dll, OpenTK.GLControl.dll

The namespaces to import are as follows:

using GameLib;
using GameLib.Helpers;
using ElementoLib;
using GroupsLib;

To make a new game, all you do is create a new Windows Form and inherit from GameWindow. Then use the helpers to make whatever you want in the game.

A helper is a concept I am using that allows me to write a lot of code efficiently. The philosophy is that above a certain level of complexity, the actions are focused on the application and the user. It does not matter that much exactly how a task is performed, but what you want when you need to do the task.

A helper cares about what happens in the application; therefore it takes a GameWindow as an argument to the constructor. What it actually does is map the properties of GameWindow to private properties. Because of this, I often write new code first in the window, then create a helper without needing to modify the code. It also allows the underlying structure of the application to change without affecting the final code in the application. In addition, I use all the techniques one usually sees in programming, but the only thing I think of as progress is the helpers. Almost all the time I spend programming goes into programming helpers, but the total percentage of change decreases. They are kind of mini-applications, each being an expert on doing different things.

The data structure of the Game engine is Groups. This is a very powerful technique for separating algorithms and data. The name "Groups" is in the plural because what the class really does is make groups on the fly for special purposes. You can store anything in Groups, for example an ElementoDocument, but it has to be stored as a "property" of an object. You can choose any name for the property and later request the objects with that property. For example, Name - Flower means "all objects that have a name, except those which are flowers". Groups uses Boolean algebra on bitstream vectors, which is very fast compared to looking up each object. The query search is abstracted away from the values, so you can pass a Property object to an algorithm to perform a task. The actual calculations happen when you do the query, so there is no performance penalty after you have the Property you need. The actual code in C# might look like this:

var prName = groups.GetProperty(-1, "Name"); // Looks it up by property name instead of direct property id.
var prFlower = groups.GetProperty(propFlower, "Flower"); // Looks up Flower by property id, if it is not -1.
var prop = prName - prFlower; // Subtracts objects that are flowers from objects that have a name.


One surprise for people new to Boolean algebra is that when you combine properties, you get a new property. It is kind of like magic: when we think of objects, we do not worry about how we think, we just think of them. With the Groups technology, you can do full queries of Boolean algebra using +, -, *, !.
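The bitstream idea can be sketched in a few lines of Python. This is a hypothetical minimal version, not the GroupsLib API: a Property is a bit mask over object slots, and combining properties only combines masks, without touching the data:

```python
class Property:
    # A bit mask over object slots; combining properties combines masks.
    def __init__(self, bits):
        self.bits = bits
    def __add__(self, other):  return Property(self.bits | other.bits)   # union
    def __mul__(self, other):  return Property(self.bits & other.bits)   # intersection
    def __sub__(self, other):  return Property(self.bits & ~other.bits)  # subtraction
    def __invert__(self):      return Property(~self.bits)               # complement

name   = Property(0b1110)  # objects 1..3 have a name
flower = Property(0b0110)  # objects 1..2 are flowers
print(bin((name - flower).bits))  # 0b1000: named objects that are not flowers
```

The mask operations run wordwise on whole machine words, which is why this style of query is fast compared to visiting each object.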

It is also a fact that in games we deal with objects in dynamic states, where each object can have more than one state. In programming, this is often solved by making a loop and then lots of "if" control blocks to filter out a certain state. Such code is very hard to read because everything is mixed up. With Groups, you can handle each type of object (which you decide) separately.

The Game engine creates an object for each Elemento figure. When it has loaded a figure, it returns an object id, so if that figure is important, take care of the id. You do not lose the figure, because you can always get it from the Groups; it is just a matter of knowing who is who. For example, if you load a background picture, it might not be that important to keep the object id.

The Game engine creates properties in the Groups that it uses internally. This is where the helpers do the work while you can focus on the things you want to do. However, nothing prevents you from messing with it.

  1. If you want to learn more about Boolean algebra, go to 2012040110183822.

I am looking into the binomial representation of binary sequences, or just "binomial vectors". Each position in the vector represents a number of "ones" in the binary sequence. Since A*B = B*A, there is a connection between the operation we do and the number of "ones" in the answer. Our goal is to find a way to calculate the binomial vector of the answer without doing the actual binary operation. A binary operation is very simple, but when you put two binary sequences against each other it becomes a lot of work. The point of binomial vectors is to reduce the work, but I don't expect it to work for more complicated problems.

The two most common binary operations are AND and OR.
Here is how it looks if you do binary operations for AND with 3 bits:


When working with binomial vectors, a position is a bitspace running over X number of "ones". A bitspace works just as well with Boolean algebra as single bits, and the interpretation of it is even richer. We are not interested in the whole bitspace, only where the address corresponds to a number of "ones", like 101100 = 3. Instead of writing 000 001 010 011 100 101 110 111, we write 1,3,3,1.

The 3 bits AND operation, same as the table above:


3 bits OR operation:


What I see here is a lot of recurring numbers, such as 0, 1, 3, and 6. Perhaps you can spot some symmetries as well, if you look closely.
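The AND table for 3 bits can be regenerated by brute force: group all pairs of 3-bit sequences by their counts of "ones" and tally the count of "ones" in the result (OR works the same way with a | b). A sketch:

```python
from itertools import product
from collections import Counter

bits = 3
ones = lambda n: bin(n).count("1")

# (ones in a, ones in b, ones in a AND b) -> how many pairs land there.
table = Counter()
for a, b in product(range(1 << bits), repeat=2):
    table[(ones(a), ones(b), ones(a & b))] += 1

# e.g. two sequences with 2 ones each share exactly 1 one in 6 of the 9 cases:
print(table[(2, 2, 1)])  # 6
```

Counting like this for small bit widths is a quick way to hunt for the recurring numbers and symmetries mentioned above.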

One thing that strikes me when working on the 2D game engine is that I really don't like games with points or time pressure. Even though I do not intend to start on big projects, my thoughts wander, since game development is a subject that interests me. To me, playing around, testing the limits of the game and exploring the unknown is what I think of as fun. It is so easy for a programmer to apply rules that are not necessary. Points can be used as a kind of measurement when it is hard to do it visually, but I do not see the point of abstract points. This is not my field of expertise, so I will not spend my time criticizing something other people know a lot better. Instead I will focus on the positive aspects of the lack of points and time pressure.

When I was a boy, there was no such thing as a reward when playing individually. I made my own rewards by accomplishing something, knowing that I could do it. Such things do not have to be difficult, but if you look at the clock all the time, it makes you feel bad. This is a fresh idea; I do not know what to build around it, but I'll keep it in mind.

This is one idea I have been thinking of, but I have not quite settled it completely. It is about thinking of procedural tasks in a different way than just going through them. First, we define a goal such that if everything were OK, it would be no problem to do it. Then we define sub tasks of the goal to do if it's not OK.

main goal
sub goal 1 // if main goal fails
sub goal 2 // if sub goal 1 fails

Each sub goal must be constructed with the purpose of making the previous goal possible. This can be written as pure actions, since an action changes one state to another.

get food from fridge
buy food in the store
go to restaurant
visit relatives

The actual goal here is not to eat, but to not get hungry. Sometimes we can remove tasks from the list after we have tried them, so the list changes while we perform the tasks. This makes it possible to work toward a long term goal without getting confused between many options. It is also strangely different from how programs are executed, where the final state is not known. One thing that makes this more robust than classic programming is the ability to construct tasks on the fly.
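The fallback list can be sketched as pure actions in Python: try the main goal first, and fall through to the sub goals, each of which exists to handle the failure above it. The outcomes here are made up for illustration:

```python
def pursue(goals):
    # Try each goal in order; the first one that succeeds ends the chain.
    for goal in goals:
        if goal():
            return goal.__name__
    return "still hungry"

def get_food_from_fridge(): return False  # fridge is empty
def buy_food_in_store():    return False  # store is closed
def go_to_restaurant():     return True   # this one works
def visit_relatives():      return True   # never reached

print(pursue([get_food_from_fridge, buy_food_in_store,
              go_to_restaurant, visit_relatives]))  # go_to_restaurant
```

Removing tried goals or constructing new ones on the fly is then just editing the list between calls to pursue.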

Now, we can construct a kind of intelligence test. If one human and one AI are given a task, the main goal, and are then free to do what they can to accomplish that goal, and if the AI performs equally well or better than the human, then it can be said that the AI is intelligent. The AI must be able to construct its own tasks and execute them. This requires understanding of the environment and prioritizing from general to specific tasks. It may also be that the AI needs to think abstractly, because the resources to perform a task might not be completely available.

2012042717281222 A function is something that takes an input and produces an output. Can we construct a function that does not have clearly defined data? As an example, I thought of a robot modelling kit:

Each part has some joints that can be connected to other joints. As long as the parts use the same standard for joining, you are free to design any "function" of the parts. The user of such a modelling kit understands the functions and can assemble a figure to meet certain requirements. There is no need to "black-box" a function; its actual usefulness is that it sets a consistent standard for input and output. Just like numbers return from one function and pass to another, the joints transform physical forces from one part to another.

If two parts are not connected, they can not exchange forces unless they collide. We can call the forces transmitted through the joints "internal" and the forces from collisions "external". If there are no external forces, then there are no internal forces, unless a part can store energy in a spring or a battery. Internal forces are triggered by external forces, which either store the energy or release it as movement or rotation.

Imagine that we could make a world of a robot modelling kit. What is exactly a "function" or a "part" in this world?

  1. Joints
  2. Internal momentum
  3. Differential description

A differential description is how the relations between the joints change over time relative to each other. Internal momentum is the velocity and rotation of the part, which can change suddenly in a collision.

To progress the thinking around Groups, which is Boolean algebra applied to dynamic object programming, I present a challenge which may shed some light on what is good and what is bad use of Groups.

Example: You have a Groups object that contains a lot of 3D coordinates, stored under the Property "Pos". You want to create a new object that represents a square, which needs to refer to 4 coordinates indirectly.

  1. Would you create 4 properties, one for each coordinate?
  2. Would you create 1 property, a list that contains the coordinates?
  3. Would you create one object representing a line, and then a square that connects 4 lines?
  4. Would you combine any of the above?

1 is a flat structure, 2 is a hidden structure and 3 is a building-block structure.

If you go for 3, the building-block structure, and you want all squares that do not contain certain lines, what do you do? Here is how this could be solved in C#:

// prSquare is assumed to be the Property of all squares.
var prCertainLines = new EqualToAnyProperty<int[]>(lines, groups, propSquareLine);
var prop = prSquare - prCertainLines;

If you went for the flat structure or the hidden structure, you would have to solve it differently. In Groups, the better design choices are usually those that balance high-level abstraction against not hiding data.

A Property object in Groups is actually a list of numbers that corresponds to a bitstream vector. When we have a single object, we can check directly with the object whether it has a property or not. Still, there might be times when we have the object id, but not the object, and want to check if it has a property. For this we can use the following code:

// BinarySearch returns the index if found, otherwise the
// bitwise complement of the insertion point.
var index = Array.BinarySearch(prop.ToArray(), objId);
return index >= 0 ? index % 2 == 0 : ~index % 2 == 1;

This binary search method fits perfectly on bitstream vectors.
I implemented ContainsObjectId, ContainsAnyObjectIds and ContainsAllObjectIds in the Property class. This is interesting, since a bitstream stores only the positions where the value changes between true and false. I wonder if this can be used to add or subtract single positions faster in a bitstream.
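For comparison, here is a Python sketch of the same membership test. My assumptions: the bitstream vector is a sorted list of flip positions starting from false, and each true run is the half-open interval [start, stop).

```python
from bisect import bisect_left

def contains_object_id(prop, obj_id):
    """prop is a bitstream vector: a sorted list of positions where
    the bit value flips. A true run is [prop[2k], prop[2k+1])."""
    i = bisect_left(prop, obj_id)
    if i < len(prop) and prop[i] == obj_id:
        return i % 2 == 0   # obj_id is exactly a flip position
    return i % 2 == 1       # inside a run iff insertion point is odd

prop = [3, 6, 10, 15]  # objects 3..5 and 10..14 have the property
print(contains_object_id(prop, 4))   # True
print(contains_object_id(prop, 6))   # False
print(contains_object_id(prop, 10))  # True
```

The found/not-found split mirrors the C# `Array.BinarySearch` trick: `bisect_left` plays the role of both the hit index and the insertion point.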

Today I thought of a way to implement Boolean functions for ranges of number values, but ended up in a completely different place. Here is a range from 0 to 9:

int (>= 0)*(< 10)

Here is the same range, except 5:

int (>= 0)*(< 10)-5

The syntax of such functions uses the Polish prefix notation found in the LISP family of programming languages. I am new to LISP, so I was surprised that people use this notation. It is far easier to write a parser for Polish prefix expressions than for the standard notation. The reason I want to use this notation is to distinguish Boolean algebra from normal programming. It also makes it far easier to extend the language with new features.
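The range expressions above can be sketched with plain predicate combinators in Python. The names `ge`, `lt`, `both` and `minus` are my own, not part of the calculator; they stand in for `(>= a)`, `(< a)`, `*` and `-`.

```python
def ge(a): return lambda x: x >= a    # (>= a)
def lt(a): return lambda x: x < a     # (< a)
def both(p, q): return lambda x: p(x) and q(x)     # '*' combines ranges
def minus(p, v): return lambda x: p(x) and x != v  # '- v' removes a value

r = both(ge(0), lt(10))   # int (>= 0)*(< 10)
r5 = minus(r, 5)          # int (>= 0)*(< 10)-5
print([x for x in range(-2, 12) if r5(x)])
# [0, 1, 2, 3, 4, 6, 7, 8, 9]
```

Representing ranges as composable predicates keeps the Boolean algebra separate from the arithmetic, which is the point of the notation.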

A thought experiment: what if all lists, commands and functions were lists/properties, where the arguments are metadata to the property function? Further, we can use Boolean algebra to combine properties when passing arguments. For example, we want to get all customers that live in Kristiansand or do not have a registered City:

(get Customer*(City*(= City Kristiansand)+!City))

First, let's look at the innermost parenthesis:

(= City Kristiansand)

This is a function that, when multiplied with a property, reduces the objects to those who live in Kristiansand. The second innermost parenthesis contains no function name, and is therefore just a Boolean expression:

(City*(= City Kristiansand)+!City)
The third parenthesis combines the function 'get' with a Boolean expression passed as a parameter.

(get Customer*_)

So why would we treat all lists as properties?

In some programming languages, everything is an object. Now we want to make a language where everything is a property. Any group of data is a collection that shares some properties, in the form of Boolean algebra. Therefore a completely group oriented language can do the same job as an object oriented language.
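A Python sketch of how the customer query above could evaluate, treating every property as a set of object ids. The data and the `eq_city` helper are hypothetical; `*`, `+` and `!` map to set intersection, union and complement.

```python
# Objects are ids; properties are sets of ids; values live in a dict.
customer = {1, 2, 3, 4}
city     = {1, 2, 3}   # objects that have a City property at all
city_val = {1: "Kristiansand", 2: "Oslo", 3: "Kristiansand"}
everything = customer | city

def eq_city(value):
    """Sketch of (= City value): objects whose City equals value."""
    return {o for o in city if city_val.get(o) == value}

# (get Customer*(City*(= City Kristiansand)+!City))
result = customer & ((city & eq_city("Kristiansand")) | (everything - city))
print(sorted(result))  # [1, 3, 4]
```

Object 4 comes along because it has no City property at all, which is exactly what the `!City` branch asks for.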

I came up with this formula for simple economics for a money buffer. A money buffer is what companies and households keep to secure their well-being. The number you get out of this formula tells how many time periods it takes until you have either doubled the buffer, or, if income is below costs, gone bankrupt.

x = buffer
y = income
z = costs
x / ( y - z )

For example, if you have $5000, earn $2000 and costs are $1800 per month, you will double the buffer in 25 months (2 years and 1 month). If you reduce the costs to $1750, you will double it in 20 months.

The question is: are you willing to live 20 months with that reduction in costs? This is a way to measure the utility of money instead of looking at money as a happiness factor. The next $50 you cut in costs will have less effect on the time to double the buffer. Instead of saving you 5 months, it will only save you 3.33 months, and it is easier to live without something for 3.33 months than for 5 months. Instead of cutting big posts that you want to keep, you can look at what is bearable during that period. After you have done one cut, it is easier to do the second, but at some point cuts do not matter anymore. Money will not keep you happy if saving it does not have any effect on your security or on the pleasure of using it.
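The formula and the examples above, as a small Python check:

```python
def periods_to_double(buffer, income, costs):
    """x / (y - z): time periods until the buffer is doubled.
    A negative result means costs exceed income."""
    return buffer / (income - costs)

print(periods_to_double(5000, 2000, 1800))  # 25.0
print(periods_to_double(5000, 2000, 1750))  # 20.0
# The next $50 cut saves only 3.33 months instead of 5:
print(round(periods_to_double(5000, 2000, 1750)
            - periods_to_double(5000, 2000, 1700), 2))  # 3.33
```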

In a thought experiment, I think of one employer (E) and one worker (W) in a game of money and loyalty. E represents what the employer thinks the worker deserves. W represents what the worker thinks he or she deserves. Each can be in 3 possible states: underestimated (-), correctly estimated (0) and overestimated (+). The combination of E and W corresponds to how the worker feels about loyalty to the employer.

        E-                E0        E+
W-      OK                Grateful  Changing to W0
W0      Underappreciated  OK        Changing to W+

I tried to put in what I think is typical emotional behaviour. What is interesting is what happens when you apply this to more general situations. Humans want other humans to be nice to them, but without it costing too much.

Today I read about Project Euler to see if I could find a problem to test Groups. Instead I discovered that my calculator solves some problems quite well.

The first problem is to add all numbers that are a multiple of 3 or 5 under 1000. Since the numbers that are multiples of both appear twice, we need to subtract the sum of their combination. This is how the solution looks, where S(p) is the sum of the multiples of p below 1000:

S(3) + S(5) - S(15) = 166833 + 99500 - 33165

The answer is 233168. What is interesting is that if you want to find the sum of multiples of 2, 3, 5, it gets more complicated because there are more terms:


S(2) + S(3) + S(5) - S(6) - S(10) - S(15) + S(30) = 249500 + 166833 + 99500 - 83166 - 49500 - 33165 + 16830

Here the answer is 366832. The general rule is that a term with an odd number of prime factors gets a positive sign, and a term with an even number gets a negative sign. If you calculated this for every prime number up to n, you would end up with n*(n+1)/2 - 1, the sum of all numbers from 2 to n. This is because all positive numbers greater than 1 are products of one or more prime numbers. The number of terms you have to write for N prime numbers:


bin(N) = [C(N,N), C(N,N-1), ..., C(N,1)], with 2^N - 1 terms in total

The last formula is the Nth row in Pascal's triangle, except for a 1 on one side. bin(N) is a vector, for example [1,3,3], that tells you how many there are of each type of term. In the above example, the first position is for (2*3*5), and then you split more until you get 2, 3, 5 at the end. It corresponds to all possible ways to replace the multiply signs with splits.
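The inclusion-exclusion sums above can be checked with a short Python sketch, using the closed form p*n*(n+1)/2 for the sum of the multiples of p below a limit:

```python
from itertools import combinations
from math import prod

def sum_multiples(primes, below):
    """Inclusion-exclusion: positive sign for an odd number of
    prime factors, negative for an even number."""
    total = 0
    for k in range(1, len(primes) + 1):
        for combo in combinations(primes, k):
            p = prod(combo)
            n = (below - 1) // p
            total += (-1) ** (k + 1) * p * n * (n + 1) // 2
    return total

print(sum_multiples([3, 5], 1000))     # 233168
print(sum_multiples([2, 3, 5], 1000))  # 366832
```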

There is something called the "Pareto principle", which says that often 20% of something contributes 80% of the results. For example, 20% of the products contribute 80% of the income of a company. This principle comes in handy in some decision planning.

One interesting thing in physics is the interpretation of force. In the old days, when the term "force" was coined, people were restricted by their ways of measuring time. They figured out that a second is a nice unit, but when it comes to computer simulations of physics, a "second" is not natural.

The way to think of forces in computer simulations is as a coordinate where a particle "should be", according to something, in the next iteration. This is described as a vector that points from where it is now to the suggested position. When we sum these vectors together, we get the total "should be" that is the force on the particle. From then on, it is the properties of the particle that decide how it will actually move.

a = F / m

The mass of the particle is nothing but a factor that tells how fast it will move in response to other particles. If it moves faster than the suggested vector, the simulation will get unstable. A mass of "1" in natural computer units is the tipping point for what kind of results you get. This is not the same as a speed limit, but it is related: high speed means basically the same problem, that the position where the particle "should be" on the next iteration is far away. It leads to the same instability as particles with mass below "1".

If the computer takes smaller steps, it has to apply a factor f somewhere. When we have N iterations per display of graphics:

f = 1/sqrt(N)

We can set a maximum velocity of a particle per iteration to guarantee stable simulation.

N = floor( velocity / maxVelocity + X )

At any time, we will have at least X iterations, but if things move faster, the algorithm will put in more iterations. One interesting aspect of this is that if we divide our simulation world into sections, we can save calculations.
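The two formulas above, f = 1/sqrt(N) and N = floor(velocity / maxVelocity + X), as a Python sketch. The names follow the formulas; treating X as the minimum iteration count is my reading of the text.

```python
from math import floor, sqrt

def substeps(velocity, max_velocity, X=1):
    """N = floor(velocity / maxVelocity + X): at least X iterations,
    more when things move fast."""
    return floor(velocity / max_velocity + X)

def step_factor(N):
    """f = 1/sqrt(N): the factor applied when taking N smaller steps."""
    return 1 / sqrt(N)

print(substeps(0.5, 1.0))        # 1 (slow particle, minimum iterations)
print(substeps(3.7, 1.0))        # 4 (fast particle, extra iterations)
print(round(step_factor(4), 3))  # 0.5
```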

In relativity, when we observe a group of particles travelling very fast, it appears to us as if time slows down for them. Other particles seem to have little effect on them when exchanging forces. This is nature's way of preserving a stable "simulation". Nature can not put in more iterations like a computer simulation, but it can change the factor which controls how much forces affect a particle: mass. Forces that work in the direction the particle travels meet a larger mass than forces working in the opposite direction. If the particle travels near the speed of light, the forces working in its direction of travel meet a near infinite mass, so a = 0.

I am working on a system for handling interruption of work. It is a notepad system, but the technique is designed to let new users take advantage of experience. If you are busy, or get easily distracted or stressed, you might benefit from this technique. The focus is to handle information in a way that lets you put some effort into it immediately. The main problem is interruptions in everyday tasks that disturb your flow. The interruption costs you concentration, but it can also be used to write down the ideas you get in the moment. When you are done with a task, you erase it, to keep your text file as small as possible and make it easy to see where you were. It is a combination of 2 techniques I have worked on earlier and 1 new technique I call "Dreaming".

Dreaming is a technique where you write sentences that start with "I want ...". You are not allowed to write things you have to do but do not want to do. For example, if your boss wants you to do something, you can file it under "I want my monthly payment". It is important to focus on the positive sides of life and on the things that are important to you. The order of the sentences does not matter, because it is you who chooses the priority at any moment. This is a typical "Dreaming":

I want to sky dive
I want to spend more time with my friends
I want to invent something

Qubiting is a technique where you ask questions that can be answered yes or no, but that remain unanswered for now. It is very efficient for brainstorming, so I have tried it in software development for a while. The art of qubiting is to see the connections between many questions, in order to get a bigger picture. A typical qubit question can be:

Can I get this door fixed tomorrow?

Researching is to write down information and tasks to do. This is how "Researching" can look like:

A new book costs about 50$
Find out if I have enough money on my bank account
My bank account is empty
Find out if I can borrow from my brother

Now that you have learned the 3 different techniques, you will learn how to combine them. First, you start out with Dreaming, writing down the things you want. Then you add Qubiting as a new layer under each "Dream". This breaks the goal down into questions that have to be answered, which makes your thought progress measurable. The third thing is to add Researching as a new layer under each qubit question. This lets you do the tasks and gather the information necessary to answer the qubit question. Here is an example:

I want to eat something really nice for dinner today
 Should I order a pizza?
 Should I go out on a restaurant?
 Should I go out on a restaurant with some friends?
 Should I buy some ingredients and make some food?
 Should I make some Taco?
  Find out what I have of ingredients
  I have tomatoes and some salad.
  Find out what I need to buy.
 Should I invite somebody home for dinner?
  I was late up last night, I need this evening to relax.

The things you strike through can be erased from the notepad or editor you use. If you examine the example above, you see that Qubiting is what gets you to browse quickly through options. Then Researching digs down into the specific reasons for or against. If you look at everything from a higher perspective, you might find that you have to do things you did not want or did not plan. The important thing is that you try to make the best possible decision; there are many dreams to choose from.

I am currently testing this technique, but I have not found a good name for it yet. One idea I had was to put parts of the names together in a word like "ResQuDream".

In my calculator, I want to implement a feature to make it more powerful. The idea is that each time I sum up a list, it is appended as a variable. For example, here is how one would write the 3D cross product:

a = [ax0, ax1, ...] + [ay0, ay1, ...] + [az0, az1, ...]
b = [bx0, bx1, ...] + [by0, by1, ...] + [bz0, bz1, ...]
c = (a1**b2 ++- a2**b1) + (a2**b0 ++- a0**b2) + (a0**b1 ++- a1**b0)

Yet, I am not sure exactly how to do it. There are also other alternatives to do this, like using '\' as an operator:

a = [ax0, ax1, ...]\0 + [ay0, ay1, ...]\1 + [az0, az1, ...]\2
b = [bx0, bx1, ...]\0 + [by0, by1, ...]\1 + [bz0, bz1, ...]\2
c = (a1**b2 ++- a2**b1)\0 + (a2**b0 ++- a0**b2)\1 + (a0**b1 ++- a1**b0)\2

Maybe this could be stored as one list, using an empty position to separate them:


I want the calculator to work on lists and lists only, not more advanced structures. Perhaps it is better to create an operator that takes out a subset of a list. It could look like x\0:3 to get the first four numbers of the list. Since this is the smallest change I can make, and it can also be used for other purposes, I have implemented it.
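A Python sketch of how the subset operator and the cross product could fit together, using ordinary slicing for x\0:3. The packed-list layout is my assumption, not the calculator's actual representation.

```python
def subset(xs, start, stop):
    # Mirrors the x\0:3 subset operator: positions start..stop inclusive.
    return xs[start:stop + 1]

def cross(a, b):
    """3D cross product over plain lists [x, y, z], following the
    component formulas above."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

v = [1, 0, 0, 0, 1, 0]  # two 3D vectors packed in one flat list
a, b = subset(v, 0, 2), subset(v, 3, 5)
print(cross(a, b))  # [0, 0, 1]
```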

Now I think I have learned what the cost of using Groups is: the initial footprint in algorithms. You don't think of this when you write object-oriented code, where everything is set up by the compiler. With a language like C# and a framework, you can just start coding. In Groups, which is "property" or "group" oriented, you can change which properties are used at run time. This means you have to be careful with the names you give properties; it is best to put them in one class. In Groups, there is no guarantee that a property id (an integer) will point to the same thing. For example, when you read a Groups file, a property can have the same name but a different id than expected. Luckily this makes it easier to reuse algorithms or modify the document with another tool. It is a bit like XML but more loosely structured, more like a graph.

There are applications for SQL and databases, for XML, and also for Groups. Commonly used on the web we have JSON, which describes data a lot like how it is written in programming languages. Yet I am still fascinated by Groups, perhaps because of the "I don't care about the content" attitude. The other data formats are pretty strict, because they tell you what datatype you got and then you have to deal with it. In Groups, an algorithm or tool can be blissfully unaware of what other things there might be. This makes it possible to bypass steps and distribute the "know how" across helpers instead of centralizing it.

The concept of a helper is derived from a mathematical principle: if you collect everything in one place and rule like a king, you don't know what is happening in the country. You might ensure that the rules are correct, but that does not help if the rules are not used correctly. However, if you appoint helpers to rule their parts of the kingdom, you can spend your time on long term planning. There is a chance that a helper will do something wrong, but then you can appoint a new one. In a centralized kingdom, once it begins to rot in the palace, the whole kingdom might fall apart. At first glance it looks like more work to maintain communication with the helpers, but over time the need for communication lessens and only important issues are brought through.

I remember when I was working for a company that had a large and complex database. Most of my time I worked on how to put the data together in the way I wanted. Some of that work remains in Groups, for example a one-to-many relation. Still, Groups has the advantage that you can pass multiple layers of criteria.

When I designed Groups I tried to avoid value based features. Yet I needed a way to tell whether an integer, which is used to refer to another object, was null or not. One thing I am tempted to do is to make a function that looks for double or string values, but if it finds an int value, it looks up that object using the int as id and returns the value of its property.

This might introduce a new level of power in Groups. You can make one object refer to a value somewhere else, without the need of introducing relations. If somebody changes the value that is referred to, it updates everywhere. Similarly, one could have a data type that worked like an expression in Elemento. Each time you looked up and took out the values of a property in an array, it could execute the expression. It would be nice if this worked seamlessly with all algorithms.

Still I am in doubt because it might take a lot of work to do this. At least I want to have a separate method, I don't want to slow performance in simpler algorithms.

Today I was working on something I badly need in my toolbox: a general parser kit. A parser is software that takes an input of characters and builds a structure of understanding from it.

Most computer users don't understand why a good parser is necessary for productivity. That is because they use a visual interface consisting of buttons, lists and work areas. When working with data and difficult work where the future is unpredictable, a tool that interprets commands is useful. The reason is simply that you have a keyboard consisting of buttons with printed letters on them, and the human brain is well suited to compose a sequence of letters into a command if it knows the rules. In a visual workspace, you have no place to store things at random and do operations on a selection. This is what you use the hard drive for: to keep things you need to compute or alter later. For many tasks it is ineffective to give everything a name; you just want to do it now. This is where parsers come in, because they can interpret a sentence with a specified syntax. When the structure is generated, it is passed along to another algorithm that executes it. The advantage of separating interpretation from execution is that the information can be stored for later use. Basically, a computer is a tool for storing such commands and executing them.

A good, general parser kit means you can design more tools quickly. These tools can look like small programming languages, designed for a specific task. The two main projects I am working on are the calculator syntax for computing with lists of numbers, and the Groups library for high level group oriented programming. Groups uses bitstream vectors as Properties and a lot of Boolean algebra. It is the ideal solution for storing information from a parser.

I figured out that when parsing a sentence, there are always characters separating the different parts. When I use bitstream vectors to store temporary data during the interpretation, each "chunk" in the bitstream represents a token. There is no word in the dictionary for exactly what I mean by working with "chunks" in this context. If you have a stream of 0s and 1s and there is a chunk in the middle, "000001111100000", then I call working with the intervals where the bits are 1 "nailing".

Nailing is outside standard Boolean algebra and outside standard arithmetic. I believe nailing is identical to a concept I made earlier, called "li". A li is an indestructible piece of information that connects two points in time. In a similar way, when interpreting a sentence, the smallest piece of information is a li connecting two positions. I think Boolean algebra gets very close, but maybe li and nailing are spot on to the earlier idea. The earlier idea was that the information in nature is built up of li, and that this is how reality holds together. Since this is a concept I worked on long ago, I will leave it for now and focus on the parsing. I will use the name "li" to refer to the connected chunks of 1s.

The reason nailing is different from Boolean algebra is that I am not interested in modified results. If I have a bunch of lis, I want to know which of them are "touched by" another bunch of lis. This is called "nailing", because you are kind of nailing those lis to the wall and dropping the others. If you have an operator, for example '+' in an expression, you can nail right and left of it to get the variables. To get the variables, you need to have looked through the expression for things that look like variables. The result is stored in a bitstream, which contains 2 numbers for each li. All the detailed information is ready; you only need the additional information to connect operators and variables. When I think of it, it is not surprising that "li" has something to do with this, since all relations can be constructed from li.
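A Python sketch of nailing, with each li as a (start, stop) pair of positions. Representing the lis as half-open intervals is my assumption; the real implementation works directly on bitstream vectors.

```python
def touches(a, b):
    """Two half-open lis [a0, a1) and [b0, b1) overlap."""
    return a[0] < b[1] and b[0] < a[1]

def nail(lis, nails):
    """Keep the lis "touched by" any of the nails and drop the rest.
    The survivors come through unmodified, unlike a Boolean AND."""
    return [li for li in lis if any(touches(li, n) for n in nails)]

# Three variable-like chunks of 1s, as (start, stop) positions:
variables = [(0, 3), (6, 9), (12, 15)]
# Nail everything around an operator sitting at positions 3..6:
print(nail(variables, [(2, 7)]))  # [(0, 3), (6, 9)]
```

This is the "not interested in modified results" point: the output intervals are the original lis themselves, not their intersections with the nails.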

So far I have developed quite sophisticated tools for playing with input, but I have yet to put it all together. For example, what I want to do first is to find all text and use this as a filter for the rest of the interpretation. I have a few tests and it works quite well. I can read parentheses and model a structure from them, and distinguish alphanumeric variables from numbers. This was done separately, because I try to get the individual pieces working and tested before putting them together. The next challenge is functions and lists; after that comes operator precedence. This is a top down approach, which I think is more flexible, but harder to optimize, than bottom up. Without Groups and bitstream properties I would never manage such complex logic. Groups lets you do much in few lines of code, so the biggest challenge is to get used to this way of working. One thing that strikes me is that with Groups, you often just run the program and it works. Some of the reason might be that it removes the possibility of common human errors.

Groups is superior to other file formats when it comes to compatibility. If I have a software v2.0 which uses the Groups format, I can open its files in v1.0. Even if I make changes in the old version and save, data from new features need not be lost. This is not easy in formats like XML, because the software usually has no place to keep unused data. In Groups, the data is loosely connected, more like in a graph, and this gives full freedom to add extra data. If there were a common file format using Groups, you could add extra data to pass between applications.

I mentioned in an earlier note that I was tempted to implement lookup parent on int types in Groups. What it basically does is let one object inherit a property value from another object. If you have 1000 people in a database and all live in "New York", it might be a good idea to separate 'city' from 'person'. An int takes only 4 bytes, but the text string in this case takes 8 bytes, twice as much space.

A problem that arises is how Boolean functions should operate on this. I am a bit worried about the fast-growing code, which even at this moment is very powerful. I don't want two sets of Boolean functions. I can restrict Boolean functions to use inheritance with the same property id. There is no reason to have multiple levels of inheritance, since each property can inherit separately. If A inherits from B which inherits from C, then the properties of A inherit from C. By restricting this to one layer I can avoid a significant drop in performance.

If you try using inheritance on ints, it will look up all values. Each value in this case refers to an object which contains the int value to use. This means that no plain int value can be specified for a single object. What a mess.

I have created a function that lets you do inheritance lookup, but not for Boolean functions. You can emulate ordinary relations by handling it explicitly. As long as I don't have any specific application for inheritance, I will not integrate it into the structure. I want Groups to be fast and simple and let the user decide the rules. The idea behind inheritance can be used anyway, when needed.
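A Python sketch of the explicit inheritance lookup, restricted to one layer as argued above. The object layout and property names here are hypothetical, not the Groups API.

```python
objects = {
    0: {"City": "New York"},          # the shared city object
    1: {"Name": "Ann", "City": 0},    # int value: inherit from object 0
    2: {"Name": "Bob", "City": "Boston"},
}

def lookup(obj_id, prop):
    """If the value is an int, treat it as another object's id and
    return that object's value for the same property (one layer only)."""
    v = objects[obj_id].get(prop)
    if isinstance(v, int):
        return objects[v].get(prop)
    return v

print(lookup(1, "City"))  # New York
print(lookup(2, "City"))  # Boston
```

Keeping the lookup as a separate function, outside the Boolean functions, matches the decision to let the user handle relations explicitly.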

  1. To see the earlier note, goto 2012050601361222.

I have found a reason to take a second look at "li" or "nail-interval" logic in bitstreams. It turns out that having some powerful functions in this logic will make parsing much easier. The problem is that a li is not just a single bit, nor is it a property consisting of a bit for each object; it is an object represented by a continuous range of bits, seen from the perspective of a property. In everyday language we have expressions like this:

  1. A garden is a place between two boundaries, containing other objects.
  2. A field is a place covered by grass.
  3. Nobody says that "a place covered by grass is a field, unless it is a garden".
  4. We are expected to understand when to use "garden" because of the difference in importance.
  5. If we put up a fence in the middle of a field, we might call it a garden.
  6. The field surrounding it is still there; the meaning of a field just can't occupy the same place.
  7. The garden is still covered by grass, so technically it is also a field.
  8. Because a garden has a richer meaning than a field, the meaning of field is suppressed.
  9. We could say that the richest object dominates others by suppressing them in the range of occupation.

Another example is how we speak of a specific day, like "Friday the 13th" or "Christmas day". All days are weekdays, but when we speak of a special day, we give it more attributes. If somebody says the 13th of January or the 25th of December, you might forget what is special about them. "April 1st" has all the attributes of a normal day, but it also has attributes which make it special. Since it has more attributes, we suppress the notion of a normal day. There are thousands of Christmas songs that were made only because of our ability to suppress the meaning of the ordinary.

A li represents a non-ambiguous transformation of information. If I say "I have been in the office from 10 to 11 pm", you assume that I am lying if you observed me elsewhere in the same period. A li is the same as two connected points of time.

This is the thing about li: it is either true, or it is not. We might be able to adjust the information so it matches the facts, but often we do not. Instead we usually attach the surrounding information, which does not quite fit into either part, to the parts we give the most attributes.

For example, things that are special to us contain ordinary attributes that we assign special meaning. "The sunset of Friday 13th" contains a more special meaning than "The sunset of Friday".

Still, the reason we do this is because it is efficient. I want to have more insight into how we can use li in computer programming. Now that I know how li information is suppressed, I can figure out how to interpret human thoughts better. The first thing I notice is that a li is falsifiable, it can be proven to not to be true.

We prove things with other li information, information that has never been shown to be false. Along the way we pick up strange things because of their properties, for example the concept of infinity. All usage of infinity can be replaced by "a sufficiently large number" because it has the exact same properties; this is what actually happens when infinity is applied in the real world. Because infinity is a special number, it gets more attention, and the meaning of "sufficiently large numbers" is suppressed.

One cool thing about the word "li" is that if you add an "e" to it, you get "lie".
The general case of problems dealing with li information is to figure out what is a lie and what is a li.
When we work with such data we split the li into different priorities, usually represented in bitstream vectors.
A li is represented in the bitstream vector as two numbers: where it starts and where it stops.
Each bitstream vector represents a type of interpretation, for example "text", "variable" or "number".
A text can contain characters that appear in a variable, so text has higher priority than variable.
If there is a text where these two types overlap, the li of the lower priority is suppressed.
If B suppress C, but A suppress B, then the suppression of C disappear unless A also suppress C.
The actual code for doing this is just a few lines, which I have named Property.NailLayers(Property[] props).
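A Python sketch of how such a layered suppression could behave, following the rules above, including the rule that a suppressed li no longer suppresses anything itself. The real Property.NailLayers works on bitstream vectors; this uses plain (start, stop) pairs.

```python
def touches(a, b):
    """Two half-open lis [a0, a1) and [b0, b1) overlap."""
    return a[0] < b[1] and b[0] < a[1]

def nail_layers(layers):
    """layers: highest priority first, each a list of (start, stop) lis.
    A li survives only if no *surviving* li of a higher-priority layer
    touches it - so if A suppresses B, B no longer suppresses C."""
    survivors, out = [], []
    for layer in layers:
        kept = [li for li in layer
                if not any(touches(li, s) for s in survivors)]
        survivors.extend(kept)
        out.append(kept)
    return out

text     = [(0, 5)]
variable = [(2, 8), (10, 12)]
number   = [(6, 7), (10, 11)]
print(nail_layers([text, variable, number]))
# [[(0, 5)], [(10, 12)], [(6, 7)]]
```

Note that the number li (6, 7) survives: the variable li (2, 8) that covered it was itself suppressed by the text li, so its suppression of (6, 7) disappears.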

The actual challenge is to figure out how to:

  1. Map additional data to the li before nailing.
  2. Extract the data from the nailed layers.

The big challenge is how to make this simple, fast and flexible. Which is something I still need to work on.

  1. If you want to learn more about bitstream vectors, I recommend the Boolean Algebra helper.

My thoughts are turning to programming language design, because I try to figure out where Groups is in the landscape. The key problem is how to apply the extra "bit" to a variable that tells you which property it is. In C-like languages you have int, byte, float and double, and all of them can be converted and assigned to each other. What if the variable name were used to tell which property it was?

int x = 0;
int y = 1;
push(x, y); // Adds a new object that has x and y properties.
prop A = get(X*Y); // Returns all objects that have both x and y properties.
prop B = get(X-Y); // Returns all objects that have x but not the y property.

I think I may create a fictional programming language for this idea, which I will call "Wills".

  1. If you want to learn more about Wills, go to the helper 2012051023022422.

Hi! I am the helper for the fictional programming language "Wills". Wills has no classes, it is a group oriented programming language. It is the first pure 5th generation language, attempting to take the next step in programming. With its simple C-like syntax and powerful features, you can do a lot more in less time. The purpose of this language is to explore and understand Groups in more detail.

  1. If you want to learn more about variable declaration in Wills, goto 2012051023084222.
  2. If you want to learn how to create objects in Wills, goto 2012051100072522.
  3. If you want to learn about sublanguages in Wills, goto 2012051100515422.

Hi! I am the helper for understanding how variable declaration in Wills works. Wills supports at its core 4 types which will be used in the examples:

  1. int
  2. double
  3. string
  4. bool

You can use these types just as you would in C#:

int i = 10;
double d = 25.5;
string name = "Sven Nilsen";
bool doIt = true;

You can let the compiler detect the type for you, using "var" instead of the type declaration.

var i = 10;
var d = 25.5;
var name = "Sven Nilsen";
var doIt = true;

Wills is based on the C# syntax, so translation from Javascript and C# is very easy.

Hi! I am the helper for understanding how objects are declared in Wills. Unlike any other programming language I know of, Wills handles objects differently and in a powerful way.

string firstName, lastName;
push(firstName = "Sven", lastName = "Nilsen");
push(firstName = "Klark", lastName = "Kent");

prop Name = FirstName*LastName;

When you create data, you simply "push it to the cloud", which is global and accessible from all places. With the "prop" type, you can retrieve all objects that have the given properties using Boolean algebra. In the example above, the property "Name" will contain all objects that have both a first name and a last name.

Notice that the compiler uses the variable name to assign a property to it when pushing. This means that the programmer cannot choose an arbitrary variable name. Each variable has a flag attached to it which makes it possible for the compiler to push the data into the right place.

How do we get values back from a property? If we want a table of all the first names, we can write:

string[] firstNames = getFirstName(Name);
firstNames = text{firstNames ++ "Tommy"};

The above code adds a new name to the array, but the compiler keeps track of which objects these names refer to behind the scenes. When update is called, a new object is created with the first name property and the value "Tommy". This is possible because the data types in Wills are dual: they have a value and they have an object id. For each instruction the programmer writes, a dual set of instructions is created to handle the object id. If we want to change a single variable, we can write like this:

int objId = 18;
string firstName = getFirstName(objId)[0];
firstName = "Larry";

This is actually bad code, because if we don't have any object with id 18, we will get an exception. To check first, we can use the "count" function:

int objId = 18;
int num = count(objId);
if (num == 0) return;
// Do update here.

Hi! I am the helper for understanding sublanguages in Wills. In programming, there is no set of notation that satisfies all usages, therefore Wills comes with a feature that lets you write your own custom language. The core sublanguages in Wills are:

  1. calc, for doing numeric calculations on lists of doubles.
  2. text, for doing text processing on lists of strings.

Outside these blocks, Wills will use the typical behavior of C#, except for the dual handling of object ids. Sublanguages will look the same whether you write in C# or any other language implementation of Wills.

Calc is a very compact and powerful syntax for doing numeric calculations. Here is how you can compute the vector length of 3D coordinates:

prop Pos = X*Y*Z;
var xs = getXs(Pos);
var ys = getYs(Pos);
var zs = getZs(Pos);
var lengths = calc{ sqrt(x**x++y**y++z**z) };

Because the compiler handles the object ids behind the scene, it will update the right objects.

The fictional programming language Wills introduces some new ideas when it comes to computation. The first thing is the tracing of object ids, which can be easily implemented by replacing basic data types with extended ones. The language also traces the property, but this can be extracted from the variable name. The thing I will focus on here is the tracing of object ids.

I create a new class for each basic type:

  1. int: PrInt
  2. double: PrDouble
  3. string: PrString
  4. bool: PrBool

Each of these classes can be written so that importing code from other projects requires little change. If two different object ids are combined in an operation, the object id is destroyed and set to -1. If one of the object ids is -1, the object id of the other is used in the resulting answer. This makes it possible to keep track of where a value came from during a calculation. It also makes it possible for methods with access to the Groups to get additional data.

If the calculation returns an array of the same size and same order as the array of object ids, you can do the update without needing to trace the objects. The same goes for generating a property "A+B": do the calculation and then update at the same size. Tracing can be used as a mechanism to detect whether what the user does is safe.

What if the user calls a function that updates the Groups before he can do the update? What if we want to use mathematics as a way to filter out specific data we want? The last question is very attractive, because we can convert a property into an array of PrDouble, do the calculation on it, convert back to a property and perform the Boolean algebra as usual on that property. It needs to sort the ids properly before creating the Property object from an array. All object ids with -1 will be ignored.

get Name - [ Height<<20 ]

The get command could accept calculator arguments inside brackets.

I am a person that likes to develop methodology that I can remember without having to look it up. When doing some testing of a bitstream implementation in C, I wondered how to compare the speed against other things. Bitstreams are frequently used in Groups and other techniques based on it, and I want to develop a sense of how fast things might go.

The solution I came up with is to combine performance tests with stability tests. 10 seconds is approximately what a developer is willing to wait for results on a performance test. This is why the method I chose is to measure which X lets you run some code 2^X times in the interval 7.5 to 15 seconds. If it is below 7.5, you increase X, and if it is above 15, you decrease it. The factor you get out approximates what level of complexity you can use the feature at. Therefore I have decided to call this the "complexity level". The point is to have a whole number that is easy to remember; I don't need to know that machine Y runs Z operations in W seconds. It also works as a motivational factor: if you are close to 7.5 and the code is far from optimized, you see the point of optimizing.

Here are some tests I did in C, trying basic operations (double):

Operation  Complexity Level
sin        28
Now we try the operations on the bitstreams, having two arguments with two numbers each:

int_arr c = opExclude(a,b);

The result is as following:

Operation  Complexity Level
opExclude  25

Since "sin" is 28, this is 2^(28-25) = 8 times slower, which is remarkably fast. This puts bitstream algebra in C on the top shelf for recommended query operations. If I initialize the arguments in the loop, it slows down to 24.

int_arr a = IntArray_InitWithValues(2, (int[]){0,10});
int_arr b = IntArray_InitWithValues(2, (int[]){2,7});

int_arr c = opExclude(a,b);


This looks promising.

I measured the complexity level of an empty for loop in C# to be 31. This was run in a virtual machine with Windows XP on a Mac mini. The C version running in Xcode on Mac OS X had level 32 for basic arithmetic (except division).

I also found something interesting by testing initialization of the class that parses calculator expressions. It had level 19, but after changing some sorted lists to static I got it up to 25.

Hi! I am the helper for complexity levels overview.
A complexity level is a number that tells you approximately how advanced programs you can make from a building block of code. It is designed to be easy to remember, to check and to obtain new levels from new code.

Basically you run the code 2^X times and take the time to see how fast it runs. The complexity level is the X you end at when rounding toward 10 seconds, which means the range is between 7.5 and 15 seconds.

Operation                     Complexity Level  Language
open+close file               21                C
open+read byte+close file     20                C
open+read 4 bytes+close file  20                C

Today I thought about the calculator notation for getting array elements and discovered something remarkable. Let's say we have a list of angles "x" which we want to turn into coordinates on a unit circle:


If we only want to do this with the first angle in the list, then we can write:


Since the first element has index 0.

What if we wanted to construct a list where the cosine and sine terms are alternating? Instead of a list [a0, a1, ..., b0, b1, ...] we want a list that looks like [a0, b0, a1, b1, ...].

To do that we would have to execute the line for each index and put the results together in one list. Unless we could fork the parser state at that point, as if we created two identical "agents" but gave them a different number each. You could assign any subset of the program to the agent, and when it is done it delivers the result to put back into your result.

This is a fascinating concept: duplicate the whole "state" of a program, then run it and see what happens. In a recursive environment, the global state might be altered and affect other agents, which is something we want to avoid. If we could get all global variables written to in a region of code, then we could duplicate that part and link it to the agent. We could duplicate the whole program, but that would waste a lot of memory.

From a theoretical view something bad might happen. If the exact same agent with the same code, variables and number is created twice, then we will have an infinite loop. This is best illustrated through a time travelling example:

If I go back in time and kill myself, it could only be somebody that looked like me but was not really me. You see, the information I take with me back in time, my consciousness, means that I don't share the same state, but a copy. If it was the exact same state I would experience the world as my old self. However, if I could get to the exact same state as before, the same thing would happen again and again. It would be kind of creepy, since every piece of me would have to be reused in the environment and put me back again. By the laws of physics that would mean everything is me, only viewed from different times.

Another problem related to this is the Halting problem, and I am unsure whether anybody would agree with me on the following. If I wanted to make software that tells whether an algorithm will execute forever or not, I would start with a modified version of the algorithm. Each place where the algorithm can return back to the same state, I would put in a check before doing anything:

20: for (int i = 0; i < 10; i++) -> for (int i; check(20) && (i < 10); i++)

164: while (true) -> while (check(164) && (true))

18: function f(x) {f(x+1); if (x > 100) return;} -> function f(x) {check(18); f(x+1); if (x > 100) return;}

When check() is executed, the state of the memory and the line number are compared to earlier registered states. It returns true if everything is OK, but if it returns false, it stops the entire program. If the program arrives at the exact same state twice, it will run forever (assuming that all operations are deterministic). To make this algorithm fail, you would need to have infinite memory and expand it forever.

Let's see how this performs on a typical example used in proofs of Halting undecidability that I found on the internet:

public static void Main(string[] args){
 string filename = Console.ReadLine(); //read in file to run from user
 if(DeterminesHalt(filename, args))
  for(;;);
 else
  return;
}

The DeterminesHalt program would look like this:

1: public static void Main(string[] args){
2:  string filename = Console.ReadLine(); //read in file to run from user
3:  if(check(3) && DeterminesHalt(filename, args))
4:   for(;check(4););
5:  else
6:   return;
7: }

Since a "filename" variable is read in for each call, you need to expand memory to store the new variables. This would require infinite memory, but if you consider the memory of each program separately, then it would stop when the same filename is given twice. With separate memory, each alternating call to run the modified program will detect yes, no, yes, no, yes, no, ... But to make it continue you would have to feed the same code again and again, which tells us that you are solving for a sub-program, not the whole. A program in this context does not have a defined scope as long as you expand it with new input.

Today a great milestone was achieved: Groups on direct memory in C. I added some typedefs for string, bool, byte and other things that make the code easier to read. Here is an example:

byte* g = malloc(1);

property sven = opNewMember(g, sizeof(Person));
Person svenInfo = Person_Info("Sven", "Nilsen", 180);
opSetData(g, sven, (byte*)&svenInfo);

property carl = opNewMember(g, sizeof(Person));
Person carlInfo = Person_Info("Carl", "Traffel", 210);
opSetData(g, carl, (byte*)&carlInfo);

property people = opOr(sven, carl);
opForEach(g, people, sizeof(Person), Person_Print);

opForEach(g, people, sizeof(Person), Person_FreeData);
opFree(g, people, sizeof(Person));

"Person" is a struct with pointers to strings, which need to be freed using Person_FreeData. The central thing here is "opOr", which merges the memory of two persons into one. The C environment puts extra space in memory which destroys the benefit of bitstreams, but this can be easily fixed by copying data into new memory and freeing the old. There is a lot you can do to make this more applicable for real software, but the basic OR, AND, EXCEPT and NOT is done.

I can easily make the code look nicer:

groups g = Groups_Init();

property sven = Person_New(g, "Sven", "Nilsen", 180);
property carl = Person_New(g, "Carl", "Spitzwegs", 210);

property people = opOr(sven, carl);
People_Print(g, people);
People_Free(g, people);


Notice that you can free a property containing data from a combination of other properties, and this makes it unnecessary to free data the same way it was created.

There is a lot of stuff you can do with Groups; once I have the skeleton, development will speed up.

Hi! I am the helper for downloading Groups in C. This library contains tools for working with bitstream algebra, like in Venn diagrams, to use in C++ and Objective-C. Bitstreams are used in many applications for keeping track of data or for problem solving.

To get the last version, visit Groups in C hosted on GitHub.

I have an idea of a model for AI programming. The model is simple:

Realios: Its task is to take two states and give the following result, comparing which of the states is closer to a goal.
  • If A < B and B possibly follows A, return -1
  • If A > B and A possibly follows B, return 1
  • If A = B, or the order is both possible or unknown, return 0
  • If A is true and B must be false, return 2
The range -1 to 1 makes it compatible with sorting, and 2 tells 'B does not belong in the same list as A'.
Imagios: Its task is to produce fictional states which it believes would serve the purpose of the mind, but it is not able to tell whether a state is possible or not.
  • Contains one state about the moment and picks one state it wants to go to.
  • Sends both states to Realios, which answers back with a number from -1 to 2.
  • The number tells whether Imagios can alter the state of the moment or needs another goal.

The separation between Imagios and Realios allows a human to take part in one of them. For example, in chess you might think of a state which you would like to go to, and Realios would tell you whether it thinks it works or not. On the other hand, if Imagios draws you a picture you can say whether you like it or not, and it will continue trying to make pictures you like.

AI algorithms are about finding which way to go within a limited amount of time. If there is a huge number of options and the goal is achievable, it is not necessary to spend much time on the decision. You just pick a goal and try to achieve it. If it doesn't work, you pick another goal or try to complete as many sub-goals as possible.

It is much simpler to program Realios for board games than Imagios. For example, in chess there are pieces that, once removed from the game, cannot come back. By checking these pieces, you can determine whether one state is before or after another. You can have multiple Realios, for example one for figuring out if you are closer to checkmate, another to check the rules, and a third for obviously stupid things to do.

Note that the states of one game of chess should be compatible with sorting and binary search. This allows Imagios to select a goal from memory by similarity or advanced search techniques. Then it asks Realios if it is possible to achieve the goal. Imagios can also use the information from Realios to traverse a tree and think ahead while playing.

The real trick here is Realios, which produces general answers for many types of problems. I got this idea by thinking of a search tree as a sorted list that splits into exclusive directions. Sometimes there are multiple ways to the same goal, but it is the task of Imagios to pick the shortest first.

Did the first update through a bitstream today for Groups in C. It is really, really, really fast. I am using the foreach macro to make the code look nicer, but it still gets huge.

Did a little experiment to find the bitspace of legal Tic-Tac-Toe winning states. Here it is, starting at 0, read from left to right and top to bottom. The winning states are those with 3 in the same line, adding together bits shifted by the cell position.

  _ _ _        
 |_|_|_| T I C 
 |_|_|_| T A C 
 |_|_|_| T O E 
7,8 56,57 73,74 84,85 146,147 273,274 292,293 448,449

You can use this as a bitstream to determine when a next move will be a winning move for an AI.

  1. If you need a library to calculate with bitstreams, go to 2012052106294622.

Yesterday I came across a way to separate geometry from the graphical representation, which I find curious. I am working with OpenGL now, which provides a lot more flexibility than the techniques I have used before. The idea is to reuse the same code or principle for both animation and drawing, based on motion along a path. I am not sure if it is the right approach, but some parts of it are appealing. For example, you can draw circles using a simple "while" loop:

double dx, dy;
while (move_Circle(m, radius, &dx, &dy))
  glVertex2d(x+dx, y+dy);

What 'move_Circle' does here is take a pointer to a 'move' structure plus additional inputs and outputs. A 'move' structure consists of a 'time' and a 'step' variable and works like a counter along the path. When the 'move' counter moves beyond 1.0, it will return false and break the loop.

There is a wide area of possibilities here, but the example above shows the basic idea. Separate the motion along the path from the code that draws it, such that the motion can be reused elsewhere.

Not long ago, I did not know about the growing popularity of functional programming. It pleases me that people are discovering this, because I have independently created a system which performs optimizations in a "functional programming" fashion. Yet I hear some people say "you can't make games in functional programming" and similar things. Perhaps you would be interested in how to think about functional programming in a bigger perspective?

The system I made/use/sell is used in animation software and 2D simulation to compute various expressions in an optimized way. It is based on the following philosophy:

You likely don't see the connection to functional programming, therefore I will try to define it explicitly:

Functional programming consists of a set of laws that are applied to a set of data. Once they are set, they cannot be changed as long as the program is running. In real life we consider "biking" a state even though it involves a lot of complex interactions, but because the state is stable and repeats itself, we think of it as a state.

A state is an abstraction of the transition between many complex functions; likewise, a function is an abstraction of the transition between many complex states. There is no such thing as "pure" and "impure" in the real world, since everything is built up of something smaller. In the mathematical world, you can imagine that a surface is perfect, and for the same reason "pure" is used about functional programming.

In the practical world, a computer algorithm is "pure" if it is faster than the change of its input; it means there is no risk that a change will go unnoticed. If the input changes faster, the algorithm will no longer be accurate and it loses its mathematical "purity".

Today's challenge is about processing lots of data. For some problems, it is not possible to create an algorithm fast enough on a single CPU or thread. It may be possible to distribute the work, but not all cases can be solved in the same way. Functional programming is a way to let the computer do a smart analysis of the functions and put them into different categories. Some functions can be recursive because they always produce a result closer to the correct state. Others return new values all the time, like 'time' or 'random'. All this is used to decide in which order and on which CPU to execute the instructions.

I discovered today that the NOT operator in Boolean algebra has interesting properties.
The first rule is that you have to invert at some location, which is usually done by adding !0.
The second rule is that inverting twice returns the same result you started with.
This also works for empty lists:

[]!0 = [0]
[0]!0 = []

Another example is at the set [1]:

[1]!0 = [0,1]
[0,1]!0 = [1]

It reminds me a bit about the relationship between 0 and 1 in binary logic.
The difference here is that I got two couples instead of one.
Another interesting thing is that I can use the NOT operator to insert into or remove from a list:

[]!1!2!3 = [1,2,3]

I can also remove an item from the list by using the NOT operator twice:

[]!1!2!3!2 = [1,3]

I am able to use the NOT operator as a constructor of ordered lists.

One thing I have been wondering about is whether it is possible to expand NOT to lists.
If I specify a vector, I want to be able to invert at both locations.

[]!1!2 = [1,2]
[]![1,2] = [1,2]

I found a way to do this, and now the NOT operator can join two lists and sort them.
I also changed the precedence so I can write:


The first one removes 5 from the sequence, while the second removes at index 4.
They both return 1,2,3,4,6,7,8.

Now I think I can integrate EXCEPT with the traditional C-style notation:

A except B = A _ B

The choice of operator is natural because "-" and "_" are at the same location on the keyboard.
One benefit of having the traditional C-style notation is "xor":

A ^ B = ( A _ B ) | ( B _ A )

The "_" operator is evaluated after "|", so the following is true:

A | B _ C | D = ( A | B ) _ ( C | D )

The most common way of using this operator is to place it to the right:

A & B | C _ D

Generally, we can think of an expression as divided into two parts,
one for the rule and one for the exception.

rule _ exception

It has been a while since I wrote for this library.
I feel like needing some time alone with my thoughts.
When I think about life, it feels so temporary.

It would be cool if there was an eternal place to go to.
Would life be worth living, the same over and over,
if you lost all your memories?
It is so strange to be, while the world around you does not give a shit.

One thing I have been thinking about, is the relationship between a state and a person.
I do not intend to become a political philosopher.
Yet, living in Norway, with such a great contrast to the world,
makes me wonder about the nature of the world not giving a shit.
Say for instance, that all countries in the world became like Norway.
If you had a disorder or got burned out, you would still get food.
A state is not a person, so by definition, a state does not give a shit.
It is only the mental projection of what people think is the state that matters.

I think that among the things we create to support us,
a state is the one imaginary thing that can be thought of as 'real'.
It is real in the sense that it lasts the entire life of most humans.
If something is true for the whole context, then it is equivalent to true.

The same could be said about the human consciousness.
When you sleep, you are not aware of your surroundings.
You are also not aware that you are not aware.
Sometimes we can be aware of ourself dreaming.
When we are awake, we are aware of it through the surroundings.
I have learned to recognize and expect my own presence.

I am pretty sure that truth does not give a shit about humans.

On the other hand, there is an optimistic view about things.
If we can make up stuff, like states, it becomes real.
Maybe we will be able to live in a digital environment some day.

A strange thing is that everything we know about the world is just a map.
I have no idea what this could mean in terms of computer programs.
Can a computer program have a model of another computer program?
It is still relying on the underlying capability to compute.
I can only have my own thoughts, because they depend on my underlying nature.

Bill Gates does not want to die.
Somebody said that he would be the first person to choose.
This is very interesting, because it is not about death.
It is about what determines the rights to do what in modern society.

Having a lot of money means you get the 'nice' stuff.
I have been a lot in touch with wealthy people.
I do not think 'nice' stuff to them means 'nice' stuff to other people.
What they mean by 'nice' is objects that bring them status.
Other people mean that things that bring more comfort are 'nice'.

I do not think a rich person has any more rights than I do.
Being rich just means dealing more with things you cannot control.
Everybody can criticize them; the media makes a game out of it.
Because of this, they easily despise other people as being of lower value.

I care more about my life than I care about money.
A human has limits; the more stress, the shorter the life.
There is nothing more comfortable than reading a book with a cup of hot chocolate.
Still I find myself exploring my thoughts rather than being comfortable.

Maybe it is when I feel insecure about myself that I do not want to feel comfortable.
There are a lot of people that show marvelous talent in making me feel that way.
Being insecure means there is something to be known that I do not know.
By reminding me of this, other people can make me feel insecure.
Still, there is a lot I know that they do not know.
I try to treat people the way I want to be treated.
So far, it has not worked, but I could use more practice.

For example, programming is second nature to me.
In the virtual world, I can build things that were impossible when I was born.
Something I want to learn is building web apps.
I have spent much time on desktop apps; I think I can translate these skills to the browser.

A web app is very different from desktop apps, in terms of data.
The data has to be compressed or simplified.
I like the idea of having one place where all updates are made.
It also makes it impossible to pirate the software.
I think that making web apps is the safest route to making money.
It can be integrated with Facebook and Twitter for easy login.

The experience with o-clock makes me want to build something more.
I am sure that there are a lot of web services for everything.
Maybe I can make something people are willing to buy.

The main problems I see are:
1. How to store data.
2. How to authenticate users.
3. How to make it highly responsive.

One idea is to only authenticate users on the server side.
The data could travel with the user, by using the url.
The limit here is 2000 characters.
In o-clock, this corresponds to almost a year of a 4-word activity.

It seems that I could make something in PHP, Heroku, or in Go at Google App Engine.
I worry about the pricing, though.

Once again, I come back here to seek questions where there are only answers to be found.

Why is it hard to be treated as an equal among equals?
Does it cost money? I do not think so.

What feels natural for a human is to act the way one sees the world.
We are told what is right, but forget that we are told.
Sometimes what we are told is wrong and it is staring you in the face.
Why can not everybody see that?

I remember my fascination for animation as a kid.
It seems this fascination never wears off.
I was not interested in being put somewhere and told what to do.
I like to create stuff, because it is a small world I control.

People want to feel different from other people, but at the same time,
they accept to be _told_ they are different.

To feel special, people think they need to be good at something.
The only thing you need to be special is to put your mind into it.

Why are people fighting wars among themselves?
The longer you think about it, the less likely you will see war as logical.
There are great wars and small wars, which spring out of uncertainty and conflict.

We fight wars because we have no plan.
Putting your mind into it is what brings best progress.

I have an idea about a game.
It is about playing as a snake that eats sheep and soldiers, with castles and forests.
What I wanted to bring, is the game mechanics of an old game into a new environment.

There is no apparent reward for making such a game.
Which is why the idea appeals to me.
Making something, with simple tools, and watch it come alive is fascinating.

I do not want to live inside other people's little bubbles.
When I see things, I get inspired and want to explore it.

I think that if I start living a boring life, I will become boring.
My mind will automatically rationalize boring and meaningless actions.

Instead, I want to expand my mind.
There is no reason to do anything, if one does not put the mind into it.

At some point, it is a question about taste.
I respect other people's taste.
Still, it is difficult to show that respect.

I do not want to tell other people what is right.
It simply can not be experienced that way.

Maybe 'right' is something one can only reason about.
To tell someone what is right is like teaching a blind person colors.

Maybe I should stop caring completely about how people interpret me.
I am very glad I have this page, where no one except me can decide its content.

Perfection is subjective.

Nothing about you is perfect, not when you zoom into it.
At the same time, it is still amazing that it works the way it does.

I do not have one large goal in life.
As long as I am doing it, then I am happy.

Inspiration is almost like being told, but it is more subtle and unintentional.
Random stuff is cool, because it is impossible to control it.

Ironically, the fact that things are indeterministic makes it impossible to prevent
bad things from happening.

I learned to create an encrypted disc image in OS X recently.
You can mount it, type a password and use it as a normal folder.

It can all be done with the Disc Utility.

File -> New -> Disc Image From Folder

One has to select "Read/Write", and I recommend 256-bit encryption.
With today's technology, it will be very difficult to crack.

The 'Resize Image' function can be used to allocate more space for the image as it grows.
However, this alone will not make the volume larger when you mount it.
For that, one needs to run the following command in the Terminal:

diskutil resizeVolume /Volumes/myfolder 1g

The command above resizes the volume to 1 GB.

What fascinates me is that I am starting to think of encryption in a different way.
Before, I thought that it was a last barrier against attacks.
Now, I think of it as a very large wall of protection.

For example, one can upload an encrypted disk image to a web server.
Even if people get access to the file, they can not open it.
Since it mounts as a normal disk, it is easy to create programs for it.

This makes it very interesting when one thinks of the internet.
The internet makes things accessible, but with encryption one can make one thing
accessible from everywhere, but only for oneself.

I like that encryption is getting easier to use.
For source code, I wonder if Github is really necessary when I am the only one reading it.
The version history is stored by Git within the disk image itself.
Still, one can use Github for the projects one wants to share.

I would like to have one huge file that I upload at the end of the day.
This makes it very easy to switch from one computer to another.

The universe is a huge place, and old.
Compared to the age of the universe, humans have not existed for very long.

This perspective makes questions about close to eternal truth interesting.
What can be said about truth in general?

With computers, we can make almost anything we want.
Some things are just too complex to be put into words.

If we want to explore the space of close to eternal truth,
we can create a world to demonstrate the aspects we want to communicate.

Because life makes up only a tiny part of the endless possibilities,
a world involving humans and animals as we know them
can not be a significant fraction of those worlds.

I think that one ends up with a more mathematical view of the world.
Because mathematics is all about truths.

Just looking at some random colors on a screen is far from "truth".
One wants to create something that has clear causes and effects.
In that space we want to explore something interesting.

Today my brother was playing an online multiplayer game,
which has around 50 heroes, of which each side selects 5.

I was thinking of a way to select the best team based on statistics,
assuming that one side picks their team first.

My idea is trying to minimize the risk of loss by brute force.
First, one has to collect data about thousands of games.
Second, one has to calculate the probability of winning (w_xy) of
hero Y against hero X.

w = P(win) = wins / (wins + losses)

For confidence, I thought of picking a value between 0.5 (zero confidence) and w.
If the computer wants to try new stuff, it can swap 0.5 with 1 to "try new things".
If the computer wants to play safe, it can swap 0.5 with 0 to "rely on experience".
The resulting probability we can call "p".

Then, for any selected team, we can calculate the probability that no hero manages
to win against any of the heroes on the other team.
The smaller this probability is, the higher the chance to win.

(1-p00)(1-p01)(1-p02)(1-p03)(1-p04) *
(1-p10)(1-p11)(1-p12)(1-p13)(1-p14) *
(1-p20)(1-p21)(1-p22)(1-p23)(1-p24) *
(1-p30)(1-p31)(1-p32)(1-p33)(1-p34) *
(1-p40)(1-p41)(1-p42)(1-p43)(1-p44)

This becomes a really small number, so it is better to use logarithms:

ln(1-p00)+ln(1-p01)+ln(1-p02)+ln(1-p03)+ln(1-p04) +
ln(1-p10)+ln(1-p11)+ln(1-p12)+ln(1-p13)+ln(1-p14) +
ln(1-p20)+ln(1-p21)+ln(1-p22)+ln(1-p23)+ln(1-p24) +
ln(1-p30)+ln(1-p31)+ln(1-p32)+ln(1-p33)+ln(1-p34) +
ln(1-p40)+ln(1-p41)+ln(1-p42)+ln(1-p43)+ln(1-p44)

The combination that gives the smallest number is the "best" team.
So, how many combinations are there?

Assuming that we can pick as many as we want of the same kind,
and the order does not matter, we have:

(n + k - 1)! / (k! * (n - 1)!)

This is called "unordered selection with replacement".
In this case, with 50 heroes of which we select 5:

(50 + 5 - 1)! / (5! * 49!) = 3 162 510

This is a piece of cake for a computer.
It could precompute the best team for every possible opponent.
I think it would be interesting to know if any "dream team" would appear.
A "dream team" is one that appears most frequently.

Maybe this technique can be used to evaluate the game balance of any game.
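To make the brute force concrete, here is a sketch in Python. The names, the tiny probability table and the hero ids are made up, and it assumes every p[x][y] < 1 so the logarithm is defined:

```python
from itertools import combinations_with_replacement
from math import log

def best_team(heroes, p, enemy_team):
    """Brute-force the 5-hero team whose sum of ln(1 - p[x][y]) over all
    matchups against the enemy team is smallest (p[x][y] is the chance
    that hero x beats hero y)."""
    best, best_score = None, float("inf")
    for team in combinations_with_replacement(heroes, 5):
        score = sum(log(1.0 - p[x][y]) for x in team for y in enemy_team)
        if score < best_score:
            best, best_score = team, score
    return best

# Toy example: hero 0 beats hero 1 with probability 0.9.
p = {0: {1: 0.9}, 1: {1: 0.5}}
print(best_team([0, 1], p, [1] * 5))  # picks five copies of hero 0
```

With 3 162 510 possible teams and 25 matchups each, a precomputation over all opponents is still feasible on a modern machine.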

You have a large text and some keywords, but want to find the place
in the text that contains all the keywords within the smallest area.

For example, if there is a page where all keywords are mentioned,
it counts as more significant than the same keywords spread over multiple pages.

The idea I had is to use a queue to enqueue new keywords into,
and if the first keyword in the queue equals the new keyword, it is dequeued.

Whenever the queue contains all keywords, the position is stored in another list.
The list is sorted by the size of the queue.

The first item in the sorted list should be the most significant place.

This search technique might improve the search of text editors and browsers.
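A sketch of the queue idea in Python, returning the smallest window of word indices that covers all keywords (the function name is made up):

```python
from collections import deque

def smallest_window(words, keywords):
    """Return (start, end) word indices of the smallest span containing
    every keyword, or None. Keyword occurrences are kept in a queue;
    redundant occurrences are dequeued from the front."""
    keywords = set(keywords)
    queue = deque()   # (index, word) occurrences of keywords, in text order
    counts = {}       # how many of each keyword is currently in the queue
    best = None
    for i, w in enumerate(words):
        if w not in keywords:
            continue
        queue.append((i, w))
        counts[w] = counts.get(w, 0) + 1
        # The front occurrence is redundant if the same word appears again later.
        while counts[queue[0][1]] > 1:
            _, old = queue.popleft()
            counts[old] -= 1
        if len(counts) == len(keywords):
            span = queue[-1][0] - queue[0][0]
            if best is None or span < best[1] - best[0]:
                best = (queue[0][0], queue[-1][0])
    return best
```

For example, in "a b x c a c b" with keywords {a, b, c}, the tightest cluster is "a c b" at indices 4 to 6.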

Some choices are so complex that it is difficult to be sure of any decision.
For example, what game engine to use for a game, or on which platform to develop an app.
I want a method that helps me make better decisions, but "delays" the answer as long as possible.
By delaying the answer, one can reuse the data by adjusting the parameters.

I have 3 alternative platforms to develop an app:

A = Apple iOS
B = browser
C = cross platform desktop

Each of these are compared by the following attributes:

a = availability = Easy to install and works on many devices
s = security = Difficult to pirate
p = productivity = Easy to read, save, import and export data

Then I start out comparing all options against each other.

2 points = better
1 point = equal
0 points = worse or same

I use a comparison matrix per attribute, where each row sums the points:

a:  A  B  C  sum     s:  A  B  C  sum     p:  A  B  C  sum
A:  -  0  2   2      A:  -  2  2   4      A:  -  1  0   1
B:  2  -  2   4      B:  0  -  0   0      B:  1  -  0   1
C:  0  0  -   0      C:  2  0  -   2      C:  2  2  -   4

Then I create another matrix with the points per attribute:

    a  s  p
A:  2  4  1
B:  4  0  1
C:  0  2  4

Now, the question is, how much does one value each attribute?
I think security and productivity are twice as important for the product
I have in mind, which gives the weight vector x = [1, 2, 2].
I can then use the dot product to compute the value for each option:

[2,4,1]*x = 12
[4,0,1]*x = 6
[0,2,4]*x = 12
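A small sketch of the calculation, assuming the weight vector [1, 2, 2] I read out of the equations above:

```python
# Attribute points per option, in the order (availability, security, productivity).
options = {"A": [2, 4, 1], "B": [4, 0, 1], "C": [0, 2, 4]}
weights = [1, 2, 2]  # security and productivity count twice

# Dot product of each option's points with the weights.
scores = {name: sum(p * w for p, w in zip(points, weights))
          for name, points in options.items()}
print(scores)  # {'A': 12, 'B': 6, 'C': 12}
```

Because the comparison data is kept separate from the weights, one can adjust the weights later and reuse the same data, which is the "delayed answer" I want.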

One thing that bothers me is how complex all software grows over time.
If you take a look at movies, you see them selling the first one together with the sequel.
In the software world, we try to force the users onto the latest version.

What if we created minimalistic applications and then made "sequels"?
Instead of forcing users onto the latest version, one could sell the old version cheaper.
Is there a mathematical formula that one could use to set the prices?

If we start with the lowest price and each sequel increases by a factor of the golden ratio, we could have prices:

Basic: 10 USD
Standard: 16 USD
Professional: 26 USD
Ultra: 42 USD
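A sketch of the price formula; the constant and function names are my own:

```python
PHI = (1 + 5 ** 0.5) / 2  # the golden ratio, about 1.618

def tier_prices(base, tiers):
    """Each edition costs the previous one times the golden ratio, rounded."""
    return [round(base * PHI ** n) for n in range(tiers)]

print(tier_prices(10, 4))  # [10, 16, 26, 42]
```

A nice side effect of the golden ratio is that each price is roughly the sum of the two before it, just like Fibonacci numbers: 10 + 16 = 26 and 16 + 26 = 42.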

When two persons receive the same number of points in the Olympics, they end up in the same place.
There can be two 1st, two 2nd, two 3rd, three 1st etc., and any combination of these.

For example, 3 participants can be ordered the following 4 ways, where the leftmost is first place:

1st, 2nd, 3rd
1st, 1st, 3rd
1st, 2nd, 2nd
1st, 1st, 1st

For x number of people, we can calculate the number of possible combinations:

f(x) = 2^(x-1)

I calculated manually for 2-7 people here
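A sketch that enumerates the structures and confirms f(x) = 2^(x-1). The idea is that each of the x-1 gaps between adjacent athletes in the standings either starts a new place or is a tie:

```python
from itertools import product

def place_structures(x):
    """Enumerate the ways x athletes can tie, as tuples of group sizes
    (compositions of x). There are 2^(x-1) of them."""
    result = []
    for cuts in product([False, True], repeat=x - 1):
        sizes, size = [], 1
        for new_place in cuts:
            if new_place:
                sizes.append(size)  # close the current tie group
                size = 1
            else:
                size += 1           # tie: the group grows
        sizes.append(size)
        result.append(tuple(sizes))
    return result

print(len(place_structures(3)))  # 4
print(len(place_structures(7)))  # 64
```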

I have been thinking about a way to model foreign key relationships.
The first axiom is to always put new data at the end of the table.
Older data will move toward the beginning of the table when it gets defragmented.
Each row has a unique identifier, and the rows are ordered by it.

The foreign keys have two values, one for the identifier and one
which I call "last look up position" (llup).
When we look up the row by a foreign key, we search backwards in the table for it.
When llup is -1, we start searching for the key from the end of the table.
If we find a row with a lower unique identifier, it means the row has been deleted.
In that case we set the llup to -2.
If we find the correct unique identifier, we update the llup.

Look up is O(1) when there is no defragmentation process.
The best thing about it is having a unique identifier that does not change.
When lots of data gets deleted and defragmented, it will create some extra work.
However, this happens only the first time a key is looked up, and gets faster afterwards.
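A sketch of the look up in Python. Rows are plain dicts here and the names are made up; the real table layout is up to the implementation:

```python
def lookup(table, fkey):
    """Resolve a foreign key (id, llup) against `table`, a list of rows
    ordered by ascending unique 'id'. Returns (row, updated_fkey).
    llup -1 means 'not looked up yet', -2 means 'known deleted'."""
    key_id, llup = fkey
    if llup == -2:
        return None, fkey                      # already known to be deleted
    if 0 <= llup < len(table) and table[llup]["id"] == key_id:
        return table[llup], fkey               # cached position still valid
    # Search backwards from the end of the table.
    for pos in range(len(table) - 1, -1, -1):
        row_id = table[pos]["id"]
        if row_id == key_id:
            return table[pos], (key_id, pos)   # found: remember llup
        if row_id < key_id:
            break                              # passed its place: deleted
    return None, (key_id, -2)
```

Once the llup is cached, the next look up is a single index check, which is the O(1) behavior described above.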

I have a feeling of not understanding something.
It is probably that I do not understand a lot of things.
Maybe there is a bunch of techniques that together become powerful.

However, I have a gut feeling of not understanding something big.
It is a wonderful feeling to live with this sensation.

One technique I want to refresh, is how to model things with relations.
For example, I have the following problem:

A high level action consists of smaller tasks.
It is bad design to use sub-actions, because they are sensitive to changes.
If you want to navigate a map, it is not enough with a list of steps.
A high level action should be composed of smaller intelligent decisions.
The required choice should depend on the state in that moment.

So, if we have a list of goals and a list of states,
we can create a table of choices.
Such a table depends on the role we want to describe.

AGGRESSIVE          | Navigation Objective | Enemy Distant Objective | Enemy Close Objective
Land, On Feet       | run                  | shoot                   | knife, choke
Land, In Vehicle    | drive                | get close               | run over, exit
Deep Water, On Feet | swim                 | get close               | shoot, knife, drown
Deep Water, In Boat | drive                | get close               | shoot, jump
Air, On Feet        | parachute            | land safely             | shoot
Air, In Plane       | drive                | bomb                    | shoot

Each actor is driven by objectives.
All objectives that have overlapping actions are exclusive.
Objectives that are not overlapping work together.
If one of the objectives that work together fails, it terminates the other objectives.
If you swim and can not get close, you can change to a "problem solving state".
The "problem solving state" is an objective that attempts to put you into another state.
It tries to find the best actions that can lead to a beneficial objective.

Any group of exclusive objectives can be matched with any other objective.
Sometimes an actor lacks the equipment or access required for navigation.
In these situations, the objective can change to "Panic Objective".
Changing objective can be an action in itself.

One pattern that keeps appearing is the group against any other group behavior.
It is like saying "these people are on the same team, therefore not on the other's team".
This can be modelled with something simple as a group id.
One thing to consider is a neutral part, which two groups fighting each other leave alone.

A group id could consist of factors with prime numbers as bases.
A neutral group can have the factor of each group it is neutral with.
Group A can have factor 2, group B can have factor 3.
Group C has a factor that is a multiple of 6, which contains both 2 and 3.
If C is 2*3*5 and D is 2*3*7, they fight each other.
The reason for this is logical, because 2*3*5 will fight 7,
so anybody that supports 7 is their enemy.
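One possible reading of the rule, sketched in Python: two groups fight unless one side's prime factors are contained in the other's (the names are my own):

```python
def primes_of(n):
    """Prime factors of a group id, by trial division (ids are small)."""
    factors, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            factors.add(p)
            n //= p
        p += 1
    if n > 1:
        factors.add(n)
    return factors

def hostile(a, b):
    """Groups fight unless one's factor set is a subset of the other's,
    in which case the superset is neutral toward the subset."""
    fa, fb = primes_of(a), primes_of(b)
    return not (fa <= fb or fb <= fa)

print(hostile(2, 3))    # True: A and B fight
print(hostile(6, 2))    # False: C (2*3) is neutral toward A (2)
print(hostile(30, 42))  # True: 2*3*5 and 2*3*7 fight
```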

Interestingly, groups that fight each other can have a lot in common.
For example, with 2, 3, 5, 7 we can construct 4 groups where each pair shares 2 factors:

2*3*5, 2*3*7, 2*5*7, 3*5*7

A more complex scenario is when we try to fit as many relations as possible within 64 bits.
We can search through connections by factorizing their number to expand the reach.
The product of the 4 last primes of the 1000 first primes fits within 52 bits.
If all are connected and none are connected in circles, we have maximum depth 5.
We can simply loop through the table of primes to factorize.

The problem is that to store the address of 1000 items, we need only 10 bits.
4 such addresses means 40 bits, which is much less than 52 bits for primes.
One advantage might be to check for a relation very fast.

A social graph represents how people are connected and in which way.
It is interesting to think in this direction, because it is about dealing with complexity.
I want to create stories that are based on belief and interacting with a social graph.
To make this I have to generate one.
I will focus on two things:

  1. Which people know about each other
  2. What people believe about each other

One idea is to generate the social graph based on traits of personality.
The more two people have in common, the more likely they are to know each other.
I assume there is a kind of probability mechanism that connect them.
Very similar to how a maze is generated until all are connected.

Traits of personality are divided into groups.
The traits are exclusive to the others in the same group.
If one trait from a group is picked, we can not pick another from the same group.
A trait can belong to multiple groups, within each of which it is exclusive.

For example, "live in space" is exclusive to "live in the mountains" and
"live in Europe", while "live in the mountains" and "live in Europe" are not exclusive.
When we pick a trait, we go through the list of group ids to check against those we already have.
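A sketch of the picking rule, with made-up group ids (group 1 holds the space/mountains conflict, group 2 the space/Europe conflict):

```python
def pickable(trait, picked, groups):
    """A trait can still be picked if it shares no exclusivity group with
    any already-picked trait. `groups` maps a trait to its set of group ids."""
    return all(groups[trait].isdisjoint(groups[p]) for p in picked)

groups = {
    "live in space": {1, 2},
    "live in the mountains": {1},
    "live in Europe": {2},
}

print(pickable("live in the mountains", ["live in Europe"], groups))  # True
print(pickable("live in space", ["live in Europe"], groups))          # False
```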

When we design such a system, it is important to be able to create new groups.
Often the groups emerge from thinking of new traits.
The traits are often taxonomic by nature, we choose their relations based on experience.
The system will run according to our assumptions.


Collision detection is a similar problem that needs this kind of system.
Each object has a group of other objects it does not need to test against.
We can take the group of all objects and subtract those which are exclusive.

To make this efficient, we could arrange objects by groups they belong to.
This can be solved by sorting the objects by group id.
If one object belongs to two groups, we could put it on the end of the first,
and at the beginning of the second.

However, this seems rather complicated, so I do not want to pursue the idea.

One example of exclusive groups is weapon equipment in games.
Each one-hand weapon is exclusive to other one-hand weapons for a hand.
Two-hand weapons are exclusive to all one-hand weapons for both hands.
Two-hand weapons are also exclusive to other two-hand weapons.

A one-hand weapon can be chosen either for left or right hand.
Each hand is a "picker" that chooses objects from two groups.
These two groups are one-hand weapons and two-hand weapons.
A two-hand weapon is picked by merging the groups of two hands.

This means a "picker" can be composed of smaller "pickers".
It would be nice to have a way to control this behavior.

One idea is to give specific information like hand attached to the weapon.
Then one can have one "picker" for both hands.
If one picks for the left hand, the right hand items could be excluded.

If we generated an interface based on groups,
we could have nested combo boxes.
In the first combo box, we can select between "two handed weapons" and
"one handed weapons".
If we select the first, only one new combo box appears.
If we select the second, we get two new combo boxes.

I am trying to learn GLSL and the learning curve is quite steep.
I am starting to get impatient because I need to learn all this low level stuff.
For example, if you take the most basic shaders possible:

attribute vec4 position;
attribute vec4 color;

varying vec4 fragColor;

void main()
{
  gl_Position = position;
  fragColor = color;
}

The 'attribute' keyword specify data that is fed to the graphics pipeline.
In my program, I have one array for each attribute.
Since I am programming in C#, I use an array of float values.
A 'vec4' data type consists of 4 float values.
In my position array, I only have 2 values per coordinate.
This is converted to vec4 when I specify '2' as second argument
to the function 'GL.VertexAttribPointer<float>'.
The vec4 data type have members 'xyzw'.
0 is inserted in the z component.
1 is inserted in the w component.
The reason 1 is inserted in the w component is mathematical convenience.
With w = 1 the vector behaves as a point in homogeneous coordinates,
which for example makes transformations and ray intersection calculations easier.

What makes me cringe a bit is the use of the same type for position and color.
However, when I think of it, I can use this to my benefit.
The vec4 data type has an analogy of 'xyzw' for colors: 'rgba'.
When you have a color with 3 components,
you want 1 filled into the alpha component.
This is handled automatically when I specify '3' as second argument
to the function 'GL.VertexAttribPointer<byte>'.
Colors can be stored in byte arrays to use less space.

Being able to reuse the same shader for different input is nice.
I have a feeling that data is converted on the fly,
right before it is processed and not before it is stored.
What it means is that if you have 1 GB GPU memory,
you can put the corresponding 1.25 GB of colors on it.

The 'varying' keyword means a value that is interpolated
in the fragment shader.
The fragment shader contains potential pixels to be rendered.
The vertex shader processes per vertex,
so 'varying' is a way to tell GLSL that 'I want this to be interpolated'.
You can not control the way a value is interpolated.
The reason for this is that the GPU might insert new vertices
because of clipping against the frustum.
The frustum is the description of the viewer's perspective.
Basically I can control the world projection of geometry
in the program, but GLSL handles the view projection.

Now let us take a look at the fragment shader:

varying lowp vec4 fragColor;

void main()
{
  gl_FragColor = fragColor;
}

We see the same 'fragColor' parameter,
but now it is interpolated to match the pixel.
By splitting into vertex and fragment shader,
we can let GLSL handle some of the optimization techniques.
The 'lowp' keyword is used to tell GLSL that low precision is enough,
which suits color values in the range 0 to 1.

My goal is to create a vertex shader
that takes position and color per vertex
and transforms the position through a uniform matrix.
This shader can be used on all rigid geometry that has no texture.
There is not much use of a shader that can not transform geometry.
A uniform matrix 4x4 is of type 'mat4' in GLSL.

Mathematics for 3D can be quite intimidating.
Luckily OpenTK gives us what we need.
There is a data type 'OpenTK.Matrix4' that we can use.
It even supports operator overloading,
so I can use '*' to combine matrices:

Matrix4 m_transform = Matrix4.CreateRotationX (1.0f) * Matrix4.CreateRotationY (1.0f);

The code for using a program and setting the transform parameter is easy:

GL.UseProgram (m_position_color_program);
GL.UniformMatrix4 (m_uniform_transform, false, ref m_transform);

The second parameter to 'GL.UniformMatrix4' tells GLSL
whether to interpret the matrix as transposed or not.
GLSL stores matrices as a list of column vectors.
If you specify a matrix as an array of floats,
you have to write it in transposed form,
or you can set the 2nd parameter to 'true'.
This will make OpenGL do the conversion for you.

Link to vertex shader
Link to fragment shader

My goal is to set an orthographic view with OpenGL
that has -0.5 at bottom of screen and 0.5 at top.
The aspect ratio should be 1.0.
The left and right side should adapt to the height and aspect ratio.

First I thought 'GL.Viewport' should do the trick.
However, this function sets the render target rectangle
inside the control or window you render to.
OpenGL fills the control by default, which is just fine.
However, now I know how to create a split screen.
One can use 'GL.Viewport' and render graphics twice.

I skimmed through the list of functions on the 'GL' object.
I could not find another function to adjust the aspect ratio.
It seems that I have to compute the view matrix and pass it as uniform variable.
I can just manipulate the transform matrix.
The 'Matrix4.CreateOrthographicOffCenter' seems to solve it.
My problem is then to find the correct coordinates for left and right.
The left side is symmetric to the right side,
so I need only to find one number.

Then I discovered an interesting problem:
When you rotate the view left or right,
the OpenGL context also rotates, but not the view.
To solve this problem, I have to know the orientation of the screen.
The following code finds the aspect ratio independent of orientation:

public float OrientationIndependentRatio () {
  var orientation = UIDevice.CurrentDevice.Orientation;
  bool portrait = orientation == UIDeviceOrientation.Portrait ||
    orientation == UIDeviceOrientation.PortraitUpsideDown;
  float w = portrait ? this.Size.Width : this.Size.Height;
  float h = portrait ? this.Size.Height : this.Size.Width;
  return h / w;
}

But then I discovered a bug:
When the app starts in portrait mode,
the view matrix is calculated wrongly,
while it appears correct when you turn left or right,
or if you start up in landscape.
There seems to be a difference between device orientation
and interface orientation.
The template project I use for Monotouch adds code to a view
inheriting from iPhoneOSGameView.
I have not found any 'InterfaceOrientation' property here.
So I had to write a custom property and send it from the ViewController.

At least I ended up with a function
that computes an orientation independent ratio.

I have a list of length N where I remove every (N-1)th item,
but the list repeats forever over those that are not removed,
until there are no items left to repeat.
For example, if I have "1,2,3,4" I remove 4 and then I have "1,2,3,1".
Each time I remove the last item in the count.
Here is the algorithm:

public static void PrintN (int n) {
  var list = new List<int> (n);
  for (int i = 1; i <= n; i++) {
    list.Add (i);
  }
  while (list.Count > 0) {
    int j = (n - 1) % list.Count;
    Console.Write (list [j] + ",");
    list.RemoveAt (j);
  }
  Console.WriteLine ("");
}

When printing out the lists for different N,
I notice that all odd numbers less than N get printed first.


This is interesting.
If I start the list at 0 I will get even numbers out first.
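A quick Python port of the algorithm, handy for experimenting with the observation:

```python
def remove_order(n):
    """Port of PrintN: repeatedly remove the item at (n-1) mod list length,
    returning the items in removal order."""
    lst = list(range(1, n + 1))
    out = []
    while lst:
        out.append(lst.pop((n - 1) % len(lst)))
    return out

print(remove_order(4))  # [4, 1, 3, 2]
print(remove_order(5))  # [5, 1, 3, 2, 4]
```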

I think I have an interesting idea for algorithm to find primes.
The challenge is to make it efficient.
The algorithm is pretty weird and I do not understand all of it.

public static List<int> Search (int max) {
  // Set capacity to the expected number of primes up to max (~ max / ln max).
  var primes = new List<int> ((int)(1.1 * max / Math.Log (max)));
  primes.Add (2);
  primes.Add (3);
  primes.Add (5);

  // Create a queue of composite numbers that get extended by adding primes.
  var nonPrimeQueue = new Queue<FPrime> ();
  nonPrimeQueue.Enqueue (new FPrime () {Val = 6, Step = 0});
  nonPrimeQueue.Enqueue (new FPrime () {Val = 6, Step = 1});

  int counter = 7;
  var heap = new Utils.BinaryHeap (max);
  // Create a bit buffer that tells which numbers we have processed.
  var buffer = new ulong[2 * max / 64];

  int j, i, listCount, added, val, prime, primesCount, min, heapCount = 0;
  bool isPrime;
  FPrime fp, np;
  while (counter < max) {
    listCount = nonPrimeQueue.Count;
    added = 0;
    for (j = 0; j < listCount; ++j) {
      fp = nonPrimeQueue.Dequeue ();
      primesCount = primes.Count;
      for (i = fp.Step; i < primesCount; ++i) {
        prime = primes [i];

        val = fp.Val + prime;
        // First, do not add prime to numbers that do not fit.
        // Second, check buffer in case we have processed the number.
        // Third, if less than counter, ignore number.
        // Fourth, do not search beyond twice the max limit.
        if (fp.Val % prime != 0 ||
          // 'val >> 6' == 'val / 64'.
          // 'val & 0x3F' == 'val % 64'.
          (buffer[val >> 6] & (1UL << (val & 0x3F))) > 0 ||
          val < counter ||
          val >= max << 1) {
          continue;
        }

        // It needs to search a little above the max limit,
        // this is why the buffer holds '2 * max' bits.
        np = new FPrime () {Val = val, Step = i};
        nonPrimeQueue.Enqueue (np);
        heap.Push (np.Val);
        ++heapCount;
        ++added;
        buffer[val >> 6] |= 1UL << (val & 0x3F);
      }
      if (fp.Val > counter) {
        fp.Step = primes.Count;
        nonPrimeQueue.Enqueue (fp);
      }
    }
    if (heapCount == 0 && added == 0) {
      break;
    }
    if (heapCount == 0) {
      continue;
    }

    while (heapCount > 0) {
      min = heap.Peek ();
      if (min < counter) {
        heap.Pop ();
        --heapCount;
      } else if (min == counter) {
        heap.Pop ();
        --heapCount;
        counter += 2;
      } else if (min >= counter + 1) {
        isPrime = true;

        // Seems to be unnecessary to check all primes.
        // Not sure how it works, maybe as an approximation to the square root of the prime.
        primesCount = (int)(Math.Sqrt (primes.Count) * 0.797724036);
        for (i = 0; i < primesCount; ++i) {
          if (counter % primes[i] == 0) {
            isPrime = false;
            break;
          }
        }
        if (isPrime) {
          primes.Add (counter);
        }
        counter += 2;
      } else {
        break;
      }
    }
  }
  return primes;
}

Here is a naive prime search:

public static List<int> SearchNaive (int max) {
  var primes = new List<int> ((int)(1.1 * max / Math.Log (max)));
  primes.Add (2);
  primes.Add (3);

  int next, i, end, primesCount;
  bool isPrime;
  for (next = 5; next < max; next += 2) {
    isPrime = true;
    end = (int)Math.Sqrt (next) + 1;
    primesCount = primes.Count;
    for (i = 1; i < primesCount; i++) {
      if (primes[i] >= end) {
        break;
      }
      if (next % primes[i] == 0) {
        isPrime = false;
        break;
      }
    }
    if (isPrime) {
      primes.Add (next);
    }
  }
  return primes;
}

When running the two side by side, the new algorithm runs at 75-80% of the speed of the naive one.
It beats a non-optimized naive method though, so it is not bad.

In binary comparison we have two states: "is" and "is not".
In set comparison we have also two states: "larger" and "smaller".
Set comparison differs from binary comparison in semantics.

In order comparison we have three states: "equal", "larger" and "smaller".
Order comparison is a superset of set comparison.

We have an "idea machine" that generates possible solutions to a problem.
Assuming all the ideas are unique, we could use set comparison to arrange them.
We define the "smallest" solution to be the best.

If the ideas are not unique, we can use order comparison to check if the idea
has been generated before.
This is the reason why order comparison is useful in normal programming.

However, we might want to filter out obvious bad solutions.
There is no reason to compare one bad solution against another,
as long we know both of them are bad.
This requires us to use binary comparison to tell if a solution "is" or "is not" bad.

Sometimes we do not know what a good solution will look like.
It may be multiple solutions that are very different.
We might not be able to tell which of them is "smaller" or "larger",
but we know neither of the solutions are bad.

Multiple solutions might be equivalent and have many forms.
Often it is hard to tell whether two solutions are equivalent or not.
We can try to transform both solutions to a "smaller" form and see if they match.
If we fail to match them, it could be possible one of the solutions needs to
be transformed into a "larger" form and then into another smaller form.
This can be illustrated using a tree:


One path can lead to a "dead end".

We do not know whether two solutions are equivalent or not in general.
No matter what algorithm we use to determine "smaller" and "larger",
it can be wrong as long as we do not know the "smallest" solution.
To have the correct algorithm means we know the "smallest" solution in every case.

This is a difficult situation, because the solution to such a problem is the algorithm itself.
How do we know whether one algorithm is "smaller" or "larger" than another?
The simple problem we started with gets replaced by a more complicated one.

What we can do to make the problem smaller is removing bad solutions.
The problem then is to know what is a relative bad solution.
If the problem is complicated, it increases the possibility of mistaking
a possible solution for a bad one.

We could write a lot of test cases and have some kind of genetic algorithm.
The problem is then to find the best genetic algorithm to use.
Whatever problem we started with, there appears a more complicated problem
to solve the simpler one.

Knowing what is "larger" and "smaller" is hard, and it is the major problem.
I have two solutions A and B which can be transformed into A' and B'.
A < B is not true if A' > B'.

We can make a "hypothesis" about what is "smaller" and "larger".
Whenever we find a counter example, we can add the counter-example to the hypothesis.
The problem is that a hypothesis can have infinite counter-examples.
If this happens, it is hardly justifiable to say the hypothesis is true.

Two hypotheses can be contradicting each other,
which means at least one of them has to be false.
How do we know when two hypotheses contradict each other?
When they disagree about whether two solutions are "smaller" or "larger".

For very complex problems, it can be unmanageable to keep track of all this information.
Every case where we can clearly tell what is "smaller" or "larger" gets us closer to a perfect algorithm.
I think this is interesting to investigate.

I recently found a cool way to create persistent data structures.
"Persistent" means the data can be restored to an earlier version.

When you deal with stuff that changes meaning over time,
it gets confusing if you use "time" to describe the order.
Instead of "time", I will use the word "step".
One step precedes another, and you can "go back one step" in a persistent structure.

My idea is very simple:
A persistent object contains a reference to the previous step.
The previous step has the same type as the persistent object.
You can have as many steps as you like.

A <- B <- C

Very often, when we create a step, the value has not changed from the previous:

A <- B <- B <- B <- C

We can save memory by giving each step a counter:

A (1) <- B (3) <- C (1)

If we go back 3 steps, we will decrease the counter:

A (1) <- B (3) <- C (1)
A (1) <- B (3)
A (1) <- B (2)
A (1) <- B (1)

A structure X contains multiple persistent objects.
Each time we call "X.Store()" it creates a step.
Each time we call "X.Restore()" it goes back one step.

A persistent structure can be complex without using a lot of memory.
The parts of it that do not change will only increase a counter per step.
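A sketch of the counter idea in Python. Store/Restore follow the text; `current` is the working value, and the class name is my own:

```python
class Persistent:
    """A persistent value: a chain of [value, count] steps, oldest first.
    Equal consecutive steps only increase a counter, as in A(1) <- B(3) <- C(1)."""
    def __init__(self, value):
        self.steps = [[value, 1]]
        self.current = value  # the working value, changed freely

    def store(self):
        # Create a step; an unchanged value just counts up.
        if self.current == self.steps[-1][0]:
            self.steps[-1][1] += 1
        else:
            self.steps.append([self.current, 1])

    def restore(self):
        # Go back one step; drop the entry when its counter reaches zero.
        self.steps[-1][1] -= 1
        if self.steps[-1][1] == 0 and len(self.steps) > 1:
            self.steps.pop()
        self.current = self.steps[-1][0]
```

Storing "B" three times after "A" and then "C" once reproduces the chain A(1) <- B(3) <- C(1) from above, and restoring walks the counters back down.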

A persistent list is a list that can be restored to a previous step.
For example, if you call "list.Store()" and add a new item A to the list,
you can call "list.Restore()" and A will disappear from the list.

When the size of a persistent list is unchanged,
we can use a counter to count the steps instead of copying the list.
This is allowed because the items on the list are persistent.
Each time we call "list.Store()" and "list.Restore()", the items on the list
take care of their own state.


A persistent dictionary is a dictionary that can be restored to a previous step.
Just like a list, you can call "dictionary.Store()" and "dictionary.Restore()".

When the size of a persistent dictionary is unchanged and
the keys are unchanged, we can use a counter to count the steps.
This saves a lot of memory.


I am trying to come up with a way to do redo/undo with persistent data structures.
The problem with the way I designed persistent objects is that they only point backwards.
"Redo" is therefore problematic.

The idea I have is to use 2 persistent objects instead of just one.
One of these objects is called "Current" and the other is called "Future".
The "Current" object is manipulated by the actions and tools.

The "UndoRedo" object keeps track of available steps.
It does this without changing the object reference of "Current".
In addition it also gives each action a description for display in history log.

I thought about a structure that is powerful enough to model everything.


The most interesting field is 'RelationId'.
How to interpret the data can depend on this field.

One use is to create a loosely structured database for research.
Each time we collect data, we create a new 'RelationId'.
This serves as a context for the data we are currently working on.

The structure can be used when we are too lazy to design a proper one.
It should be sorted by 'Id' to make lookups fast.
Lists can be modelled by referring multiple times to the same parent.

One particularly interesting application is when relations are not necessarily true.
If there are multiple alternatives, we can work on them one by one.
By using relations, we can also infer the types of parent and child.
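A sketch of what such a structure could look like. Only 'Id' and 'RelationId' are named above; 'ParentId' and 'Value' are my guesses for the remaining fields:

```python
from dataclasses import dataclass
from bisect import bisect_left

@dataclass
class Record:
    # 'Id' and 'RelationId' come from the text; the rest is assumed.
    Id: int          # unique, rows kept sorted by it
    ParentId: int    # relation to a parent row; lists repeat the same parent
    RelationId: int  # context: how to interpret this row
    Value: str       # free-form payload

def find(rows, id_):
    """Binary search in rows kept sorted by 'Id'; returns the Record or None."""
    ids = [r.Id for r in rows]
    i = bisect_left(ids, id_)
    return rows[i] if i < len(rows) and rows[i].Id == id_ else None
```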

One problem I have been working on is dynamic concurrent processing.
My inspiration for this problem is crawling web sites.
This is a process that takes time, so it may be a while before you get a response.
Each time we receive a response, we want to analyze the site and get the links.
The links go back to the list of web sites we want to process.

If we process each link sequentially, we will have to wait for a response before continuing.
A faster solution is to send requests in parallel and order the responses the same way we sent the requests.
This will make sure we can analyze the web pages in the same order as sequentially, but faster.

As simple as it sounds, doing this efficiently and safely is really complicated.
The naive method is to create one thread for each request and communicate through an object that only one thread at a time can access.
However, there is no reason to use multiple threads when most of them just sit waiting.

The root of the problem is how to represent the tasks that can be run in parallel.
Should we be able to pause the process and continue later?
Should we be able to cancel the ongoing tasks when we pause?
Can we just ignore the ongoing tasks?

In order to achieve any of this
we need an object representing the model of concurrency.

For this purpose I suggest representing tasks as a list of task objects.
States can be represented as a list of state objects.
A state object contains the number of tasks in that state.

I like this representation because we can have many states.
Each state has a delegate that starts a task and returns true if the task could be started.
Each state has a delegate that returns true when a task is done.

Each state "pushes" the next if there are no tasks ready.
It does not matter how many tasks are in the previous state.
What matters is how long it has been since the last task was received.
We can think of the whole pipeline as having an update frequency.
At each 'tick' we check the first state in the list and see if anything is ready.
If there is no ready task, we check the next state.

It all runs on the same thread, so tasks can emit new tasks when done.
This makes the algorithm suitable for crawling web sites.

The name I chose for this algorithm is "lazy pipeline".
The code for it can be found here.
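
Since the linked code is not included here, the following is a much simplified, synchronous sketch of the tick idea: at each tick the first state with a ready task is serviced, and its result flows to the next state (the asynchronous start/done delegates are left out):

```python
# Simplified, synchronous sketch of the tick idea: each state holds a
# queue and a work delegate; at each tick the first state with a ready
# task is serviced and its result is pushed to the next state.

class State:
    def __init__(self, name, work):
        self.name = name
        self.work = work   # work(task) -> result for the next state
        self.queue = []

def tick(states, results):
    for i, state in enumerate(states):
        if state.queue:
            out = state.work(state.queue.pop(0))
            if i + 1 < len(states):
                states[i + 1].queue.append(out)
            else:
                results.append(out)
            return True
    return False  # nothing was ready this tick

def run(states):
    results = []
    while tick(states, results):
        pass
    return results
```

Because everything runs on one thread, a work delegate could also append new tasks to the first state, which is exactly what crawling needs.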

If you pick a random integer, the chance it is divisible by X is 1/X.
For example, if X = 2, then the chance that a random integer is divisible by 2 is 1/2.

If you want to find the chance of a random integer being divisible by 2 and 3,
you can multiply 2 and 3 together: 1/(2*3) = 1/6

The chance of being divisible by both 2 and 4 is 1/4, because divisible by 4 is a subset of divisible by 2.
We can write this as P(mod 2 intersect mod 4) = 2/(2*4) = 1/4

P(mod 2*3 intersect mod 3*5) = 1/(2*3*5)

Generally speaking, we can use this formula:

P(mod X intersect mod Y) = (X intersect Y) / (X*Y)

The '(X intersect Y)' part means the common factors of X and Y.

When we want to find the chance of divisible by X or Y,
we can use the following rule:

P(mod X union mod Y) = 1/X + 1/Y - (X intersect Y) / (X*Y)
= (X + Y - (X intersect Y)) / (X*Y)

Example: P(mod 2*3 union mod 3*5) = 1/(2*3) + 1/(3*5) - 3/(2*3*3*5)
= (3*5)/(2*3*3*5) + (2*3)/(2*3*3*5) - 3/(2*3*3*5)
= ((2*3) + (3*5) - 3)/(2*3*3*5)

X and Y are both multiples of '(X intersect Y)',
so we can write X + Y - (X intersect Y) = (Z + W - 1)*(X intersect Y)

P(mod X union mod Y) = (Z + W - 1)*(X intersect Y)/(X*Y)
= (Z + W - 1) * P(mod X intersect mod Y),
Z = X except (X intersect Y), that is, X divided by the common factors
W = Y except (X intersect Y), that is, Y divided by the common factors

The chance of a random number being divisible by X or Y
equals the chance of it being divisible by X and Y
multiplied by a number equal to the sum of distinct factors minus one.
A shorter way to write this is:

P({X+Y}) = ({X-Y} + {Y-X} - 1) * P({X*Y})
P({X*Y}) = {X*Y}/(X*Y)
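
The union rule can be checked numerically. Here gcd(X, Y) plays the role of '(X intersect Y)', and counting over one full period (the least common multiple) gives the exact probability:

```python
# Numeric check of the union rule: over one full period (the least
# common multiple) the fraction of integers divisible by X or Y equals
# (X + Y - gcd(X, Y)) / (X*Y), with gcd playing the role of
# '(X intersect Y)'.
from math import gcd

def p_union(x, y):
    return (x + y - gcd(x, y)) / (x * y)

def p_union_by_counting(x, y):
    period = x * y // gcd(x, y)  # lcm(x, y)
    hits = sum(1 for n in range(1, period + 1)
               if n % x == 0 or n % y == 0)
    return hits / period
```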

The brackets mean we use Boolean algebra on the set of factors.
The general case for 3 variables with no common factors:

P({X+Y+Z}) = 1/X + 1/Y + 1/Z
- (1/(X*Y) + 1/(Y*Z) + 1/(Z*X))
+ 1/(X*Y*Z)
= (Y*Z + Z*X + X*Y - X - Y - Z + 1)/(X*Y*Z)

The general case for 3 variables with or without common factors:

P({X+Y+Z}) = 1/X + 1/Y + 1/Z
- ({X*Y}/(X*Y) + {Y*Z}/(Y*Z) + {Z*X}/(Z*X))
+ {X*Y*Z}/(X*Y*Z)
= (Y*Z + Z*X + X*Y
- Z*{X*Y} - X*{Y*Z} - Y*{Z*X}
+ {X*Y*Z})/(X*Y*Z)

This can also be written as a whole number multiplied by the chance of all 3 factors:

P({X+Y+Z}) = ({Y-X*Y*Z}*(Z-{Z*X}) + {Z-X*Y*Z}*(X-{X*Y}) + {X-X*Y*Z}*(Y-{Y*Z}) + 1) * P({X*Y*Z})

The set {X*Y} is always a subset of X and Y.
Likewise, the set {X*Y*Z} is a subset of X, Y and Z.
{Y-X*Y*Z} is Y unless X, Y and Z contain a common factor.
Z-{Z*X}, on the other hand, is Z-1 if Z and X contain no common factor.
In general, we can find a whole number N
relating the chance of divisible by any number
to the chance of divisible by all numbers in a list:

P({∑iXi}) = N * P({∏iXi})

N is the same no matter how many duplicates there are of the same number in the list.
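
A brute-force check of this claim: over one period (the lcm) exactly one integer is divisible by all the numbers, so N = P(any)/P(all) is simply the count of integers divisible by at least one of them, and it ignores duplicates:

```python
# Brute-force check: over one period (the lcm of the list), exactly one
# integer is divisible by all the numbers, so N = P(any)/P(all) equals
# the count of integers divisible by at least one of them.
from math import gcd

def n_factor(xs):
    period = 1
    for x in xs:
        period = period * x // gcd(period, x)  # lcm of the list
    return sum(1 for n in range(1, period + 1)
               if any(n % x == 0 for x in xs))
```

For [2, 3, 5] this matches the closed formula above: 15 + 10 + 6 - 2 - 3 - 5 + 1 = 22.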

There are two properties of the world that I think are characteristic:

  1. Geometrical problems
  2. Concurrent problems

As computers get faster and more connected,
geometrical and concurrent problems become less significant for software.
If we created an artificial intelligence, it would not need the same
mental capacity to process geometrical and concurrent problems,
except when it tries to push the limits on a much higher level than humans.

Take for example the limit of human comprehension.
There is only a small amount of information humans can process per second.
Using that little information, one makes decisions that mostly benefit oneself.
When we combine this with scarce resources and lots of humans,
we understand humans are not equipped with the mental capacity to analyze the situation.

Seen from the perspective of an artificial intelligence,
which possesses neither such strict geometrical nor concurrent limitations,
the humans look hopelessly tangled into their wars and conflicts.

What bothers me is the lack of consensus on how to approach these problems.
The right method is of course to use mathematics instead of ideology.
I prefer to look at the limitations of humans as a mathematical problem,
which may be studied independently of the situation in the world.
The goal in this pursuit is to develop methods and language to describe and solve such problems.

If we have an object in a state and the probability for that object
to leave that state within one time unit is P, we should be able to pick a random number
between 0 and 1 and get how long the object remains in the same state.

So we pick a random number R between 0 and 1.
The probability of the object leaving that state within N time units is:

1-(1-P)^N = R

By solving this, we get:

1-R = (1-P)^N
ln(1-R) = N*ln(1-P)
N = ln(1-R)/ln(1-P)

Using this formula, we can simulate objects changing states
without computing every step.

I figured out that if you observe objects remaining in the same state over time,
you can compute the probability P of leaving the state within one time unit from the average time N.
I guessed the formula and confirmed it using numerical evidence:

P = 1-(1/e)^(1/N)

You can also compute the average time N from probability P:

N = -1/ln(1-P)

This might be used to compute expected values of N in a Markov chain.
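
Both formulas can be checked numerically; the seed and sample size below are arbitrary:

```python
# Numeric check: simulate N = ln(1-R)/ln(1-P) many times, compare the
# sample mean to N = -1/ln(1-P), and invert back with P = 1-(1/e)^(1/N).
import math
import random

def simulate_time(p, rng):
    r = rng.random()
    return math.log(1 - r) / math.log(1 - p)  # N = ln(1-R)/ln(1-P)

rng = random.Random(42)
p = 0.1
samples = [simulate_time(p, rng) for _ in range(200000)]
mean_n = sum(samples) / len(samples)
expected_n = -1 / math.log(1 - p)              # N = -1/ln(1-P)
back_p = 1 - (1 / math.e) ** (1 / expected_n)  # P = 1-(1/e)^(1/N)
```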

If you have a probability changing over time for an object to remain in a state,
you can simulate the time N spent in the same state using a loop.
First you need a concept of "minimum time", which is the moment N is relative to.

  1. Compute the simulated time using probability P per time unit.
  2. If the simulated time is more than the time interval to the next probability change,
    set the minimum time to the next probability change and go to 1.
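
A sketch of this loop, assuming the probability changes are given as a list of (start_time, P) pairs and that a fresh random number may be drawn in each interval (allowed because the process is memoryless):

```python
# Sketch of the two-step loop: P is piecewise constant, and we
# re-simulate whenever the drawn time crosses the next change.
import math

def simulate_piecewise(changes, horizon, r_source):
    t = 0.0  # the "minimum time"
    for i, (start, p) in enumerate(changes):
        end = changes[i + 1][0] if i + 1 < len(changes) else horizon
        n = math.log(1 - r_source()) / math.log(1 - p)  # step 1
        if t + n <= end:
            return t + n        # the event falls inside this interval
        t = end                 # step 2: move minimum time, go to 1
    return horizon              # no event before the horizon
```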

When you collect data about real world processes,
you can convert frequency of leaving state over time to probabilities.
The probability can be calculated using this formula:

P_i = F_i / (∑_{j=i}^{n} F_j)

Only the data at a later moment is relevant for the probability P of leaving state.
The formula above assumes setting P constant for a time interval is a good approximation.

An alternative way is to sum frequencies up to each point in time and divide by the total sum.
This can be sorted as a list of values from 0 to 1 where each item has an associated time.
To simulate you pick a random number between 0 and 1 and look up time using binary search.
This method is faster but assumes you got enough data to make a good approximation.
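
The lookup-table method in a few lines, using made-up observation data and the standard library's binary search:

```python
# Sum the frequencies into cumulative values from 0 to 1, then invert
# the table with binary search to sample a time.
import bisect

def build_cdf(times, frequencies):
    total = float(sum(frequencies))
    cumulative, acc = [], 0.0
    for f in frequencies:
        acc += f
        cumulative.append(acc / total)  # values from 0 to 1
    return cumulative, times

def sample_time(cdf, r):
    cumulative, times = cdf
    return times[bisect.bisect_left(cumulative, r)]
```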

Different states can have different distribution over time.
Using the first method, you can treat change of probability as a vector.
The vector has one dimension for each new state.
The sum of the dimension values is the probability of changing state.
Using the second method requires the data to have the same unit of time.
Both methods require you to have a vector for each time interval.

A cannon rotates around with a probability P of firing per time unit.
We can simulate firing in a direction by computing the time N from P
and finding the angle by multiplying with the velocity of rotation.

The cannon moves in the opposite direction when it fires.
The bullet might hit another cannon.

The nice thing with randomness is we can simply cancel the information we do not need.
In this example there are only two types of events that can influence the future.
The first one is the next cannon that fires.
The second one is the next cannon hit by a bullet.
In both cases, we need to look at the event that happens first.

When we simulate a random event, it is merely hypothetical.
It only becomes real if nothing else happens that prevents it.
We use the fact that nothing happened until it was disturbed.
Randomness allows us to jump ahead to that point and then redo calculation.

This is general for all "hypothetical events".
As long as we can describe behavior by some fixed set of information,
the behavior is predictable and gives room for simulated hypothetical events.
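
A minimal sketch of a hypothetical event between two cannons (the names and probabilities are made up): each gets a simulated firing time, only the earliest one becomes real, and everything after that point would be recalculated:

```python
# Each cannon gets a hypothetical firing time simulated from its
# per-time-unit probability; only the earliest event really happens.
import math
import random

def hypothetical_time(p, rng):
    return math.log(1 - rng.random()) / math.log(1 - p)

rng = random.Random(7)
cannons = {"A": 0.2, "B": 0.05}
times = {name: hypothetical_time(p, rng) for name, p in cannons.items()}
winner = min(times, key=times.get)  # the only event that becomes real
```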

The average time is different from the median time per event.
To find the average time we use the formula:

N = -1/ln(1-P)

For example, in a queue of 30 people that needs treatment,
we can sum up all the hours of treatment and divide by 30 to get average time.

The median time per event is when 50% of the events have happened:

N = ln(1-0.5)/ln(1-P)

This is relative to when all treatments start at the same time.
I need to keep an eye on such subtle differences.
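
For P = 0.1 the two quantities differ noticeably:

```python
# Mean versus median time in state for P = 0.1.
import math

p = 0.1
mean_time = -1 / math.log(1 - p)                   # about 9.49
median_time = math.log(1 - 0.5) / math.log(1 - p)  # about 6.58
```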

When you know the size and average value of a population,
and study a minority, you can also learn something about the majority:

avg_-m = (N*avg - N_m*avg_m)/(N - N_m)

'_m' in subscript means "minority".
'_-m' in subscript means "not minority".

The reason is that the minority is a subset of the population.
If the minority is not a subset, it becomes more difficult.
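
The formula in code, checked against a small made-up population:

```python
# The rest of the population carries the remaining total, so the
# majority average follows from the known totals.
def majority_average(n, avg, n_minority, avg_minority):
    return (n * avg - n_minority * avg_minority) / (n - n_minority)
```

For the population [1, 2, 3, 9] with minority {9}: (4*3.75 - 1*9)/(4 - 1) = 2, the average of [1, 2, 3].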

In special relativity, when a stationary observer watches a photon between mirrors A and B moving at velocity 'v', we have the following equation:

v^2*(t_B' - t_A')^2 + c^2*(t_B - t_A)^2 = c^2*(t_A' - t_B')^2

The mirrors are faced perpendicular to the direction of the velocity.
The equation above is identical to the following one:

(v/c)^2 + ((t_A - t_B)/(t_A' - t_B'))^2 = 1

This means velocity and ratio of time relates to each other
as x and y component of a unit vector.

Because everything has a relative velocity to everything else,
everything also has a relative time to everything else.

The highest rate one system can move in time relative to another is 1.
This is when they move in the same reference frame.
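
A small check of the unit-vector relation with beta = v/c = 0.6:

```python
# The time ratio dt/dt' is sqrt(1 - beta^2), and together with beta it
# squares to 1, like the components of a unit vector.
import math

beta = 0.6
time_ratio = math.sqrt(1 - beta * beta)  # dt/dt' of the moving clock
```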

When I use my "best fit" algorithm in 2D,
there seems to be an easy connection to complex numbers.
I treat the points ((X, Dx), (Y, Dy)) as complex numbers with dual coefficients.
The algorithm in Einstein notation is the following:

W = (A - avg_A)_i (B - avg_B)_i
C_i = (A - avg_A)_i norm(W) + avg_B

A is the object at rest in any coordinate frame,
B is the deformed object with forces acting on each particle.
The normW variable transforms points from rest to the restored object.
In addition it also calculates the acting force in the dual part.

This algorithm is much simpler than any other solution I have seen.
Also I do not know any algorithm that computes the limit of the forces.

To understand why this works, I might reinterpret the complex numbers.
I now need to see them as a rotation of a vector (r, 0).

(r + i*0)*e^(iα) = r*cos(α) + i*r*sin(α)

I assume 'i' can be expanded to a vector.
The laws of this vector follow the Grassmann product or the Levi-Civita symbols.
All points in any space should be determined by three components:

q = (r, α, i) = r*e^(iα)

The conjugate is the same as changing the sign of α or i:

r*e^(-iα) = r*e^(i(-α)) = r*cos(-α) + i*r*sin(-α) = r*cos(α) - i*r*sin(α) = q*

Therefore we have the following equivalence:

(r, α, i) = (r, -α, -i)

We can think of it as a line in space.
If we set α = 0, the way two such systems relate to each other is by real numbers:

(r1, 0, i1) * X = (r2, 0, i2), X ∈ Real

Another way to describe this:

qq* = 1

If we have the same complex dimension, two such systems relate to each other by complex numbers:

(r1, α1, iμ) * X = (r2, α2, iμ), X ∈ Complex

I have figured out a way to take the logarithm of a complex number without a square root:

ln((r, α)) = ln(r) + iα

Usually, one calculates the length of a complex number by using a square root.
We can avoid this by using the formula:

ln((r, α)) = ln(r^2)/2 + iα

I need a formula for computing the power of complex numbers.

a^b = e^(b*ln(a))
e^((b0 + i*b1)*(ln(r_a^2)/2 + i*α_a)) = e^(b0*ln(r_a^2)/2 - b1*α_a) * e^(i*(b0*α_a + b1*ln(r_a^2)/2))
r_(a^b) = e^(b0*ln(r_a^2)/2 - b1*α_a)
α_(a^b) = b0*α_a + b1*ln(r_a^2)/2
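
These formulas can be checked against Python's built-in complex power (the principal branch):

```python
# Power formula check: r_a^2 is taken without a square root, and the
# angle alpha_a via atan2, then compared with the built-in a ** b.
import math

def complex_pow(a, b):
    r2 = a.real * a.real + a.imag * a.imag  # r_a^2, no square root
    log_r = math.log(r2) / 2                # ln(r_a^2)/2
    alpha = math.atan2(a.imag, a.real)      # alpha_a
    b0, b1 = b.real, b.imag
    r_out = math.exp(b0 * log_r - b1 * alpha)
    a_out = b0 * alpha + b1 * log_r
    return complex(r_out * math.cos(a_out), r_out * math.sin(a_out))
```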

I need a formula for computing arctan of a complex number.
From this website I have the formula:

w = 1/(2i) * ln ((1 + iz)/(1 - iz))

I found a loophole by using the logarithm of complex numbers.
The logarithm depends on arctan of the underlying real type, so it works.
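
The formula can be checked against the library arctan; it holds on the principal branch, away from the cuts on the imaginary axis:

```python
# w = 1/(2i) * ln((1 + iz)/(1 - iz)); the complex logarithm itself only
# needs atan2 on plain floats, which is the loophole mentioned above.
import cmath

def arctan(z):
    i = 1j
    return 1 / (2 * i) * cmath.log((1 + i * z) / (1 - i * z))
```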

I need a formula for computing the exponential of a complex number.

e^(a0 + i*a1) = e^a0 * e^(i*a1)
= e^a0 * (cos(a1) + i*sin(a1))

I have been thinking more about systems that remain in the same state
with probability 'p' over time.
My goal is to find a way to simulate such systems
with a continuously changing probability.
In order to get closer to that goal,
I need to understand infinitesimal probability over time.
Infinitesimal probability can be found by thinking of two different units
of probability over time, for example 'per hour' and 'per minute'.
The expected time can be converted from the smaller time unit to the larger
by multiplying with a factor 'k'.
If this factor 'k' grows to infinity we get infinitesimal probability:

ln(1-r)/ln(1-p_ε) = ∞*ln(1-r)/ln(1-p)
1/ln(1-p_ε) = ∞/ln(1-p)
ln(1-p_ε) = ln(1-p)/∞ = ln((1-p)^ε)
1-p_ε = (1-p)^ε
p_ε = 1 - (1-p)^ε
p_ε = -ln(1-p)*ε, since (1-p)^ε = e^(ε*ln(1-p)) ≈ 1 + ε*ln(1-p) to first order in ε

In theory infinitesimal probability can have any value between 0 and infinity.
This is because it is a coefficient of ε = 1/∞ which is infinitesimal.
In practice the coefficient is very low.
For example, a probability of 0.999999999999999 gives us 34.53957599234088ε.
This means we can treat probability 1 as a special case.
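
The coefficient of ε is -ln(1-p), and the quoted value can be reproduced directly:

```python
# The coefficient of ε in p_ε = -ln(1-p)*ε.
import math

def eps_coefficient(p):
    return -math.log(1 - p)
```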

We have to keep track of what probability per time unit we convert.
To convert from infinitesimal probability to normal probability:

p = 1 - (e^(-p_ε))^∞ = 1 - e^(-p_ε*∞)

When we plot infinitesimal probability, we have to keep in mind
that the time axis is infinitely larger than the p_ε axis.

If we pick a random number between 0 and infinity,
the system leaves the state if the value is less than p_ε.
This follows from the reasoning that the expected time in the same state is given by:

N = ln(1-r)/ln(1-p_ε)

And the time that is more than 1 belongs to another probability,
when the probability changes continuously with time.
This means for random values 'r' less than 'p', the system changes state.

When we plot continuous probability over time,
we think of the graph as showing the corresponding probability for a unit of time.
If we could do an infinitesimal calculation,
the actual probability used in this calculation
would be the infinitesimal probability.
As the number of infinitesimal calculations grows,
the simulated behavior approaches the continuously changing probability.

Our problem is that infinitesimal calculations are not practical.
Is there a way to compute expected time directly in one operation?



Library guide Read this to understand how to use this library.
Calculator (With experimental, but simpler mathematical notation)
Boolean Algebra Helper (My approach to Boolean algebra, with arithmetic notation)
Word Counter (Filter out common words from a text to see if there is any new information in it)
Call Methods (Find methods that are called in same class by pasting in code)
Complexity Levels (Computer 2X iterations between 7.5 and 15 seconds)

Back to the top