Ugis

-> DESCRIPTION:

UGIS (UrbanGraffitiInteractiveSimulation) is a digital graffiti system that captures a specific physical source (laser, cell phone, Wiimote) and synthesizes it, simulating a spray can on a surface.

UGIS is based on 4 fundamental characteristics:
1. Multiplicity of elements that can be used as a spray can (laser, cell phone, Wiimote, etc.).
2. Freedom in the field of projection: it can project on exterior or interior walls, and on irregular surfaces such as a car.
3. A gesture-recognition system, which recognizes movements, or sets of certain movements, and transforms them into an action or a graphic element. Thus, drawing a triangle can be replaced by a given image.
4. Multitracking, which allows several elements to operate simultaneously; for example, two users, each with a laser, drawing on the same wall.

UGIS is entirely made in Processing, using a modified version of the BlobDetection library by V3GA.

This system is based on, and is a wider and deeper exploration of, the experience made by the people of Graffiti Research Lab.

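A minimal sketch of the capture pipeline just described, assuming a webcam aimed at the projection surface and V3GA's BlobDetection library (an illustration, not UGIS itself): very bright pixels are isolated, and each detected blob, e.g. a laser dot, leaves a spray dab. The loop over all blobs is what makes the multitracking characteristic possible.

import processing.video.*;
import blobDetection.*;

Capture cam;
BlobDetection bd;

void setup() {
  size(640, 480);
  cam = new Capture(this, 160, 120);
  cam.start();
  bd = new BlobDetection(cam.width, cam.height);
  bd.setPosDiscrimination(true);  // look for bright zones
  bd.setThreshold(0.9f);          // keep only very bright pixels (the laser dot)
  background(0);
}

void captureEvent(Capture c) {
  c.read();
}

void draw() {
  cam.loadPixels();
  bd.computeBlobs(cam.pixels);    // find the bright blobs in the current frame
  noStroke();
  fill(255, 40, 40, 60);          // soft, accumulating "spray" dab
  for (int i = 0; i < bd.getBlobNb(); i++) {
    Blob b = bd.getBlob(i);       // blob coordinates are normalized to 0..1
    if (b != null) ellipse(b.x * width, b.y * height, 12, 12);
  }
}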

 

Ramas

A video made with a demo version of a possible functional universe of this platform. It is the concrete intersection of the theoretical situation of the ecosystem-based programming system and the idea we have about the future (contained in the video as text). Both ideas meet in the operation of the branching system as an evolutionary organ: as a character of its own, beyond each particular item.


There are several systems for the algorithmic generation of video or audio, ranging from random composition to the use of neural systems.
In our work we observe and try to decipher the natural components of beauty. We find in this exercise that a system that is not perfectly decipherable, but is delimited and has clear rules, helps generate this natural aesthetic. It is the aesthetics of life, of ecosystems.

In an ecosystem we see objects move in reaction to, and as a consequence of, the current or previous acts of other elements of the same environment. This orderly system does not imply “order” or chaos; it can generate any kind of tension in attitude and development, but the beauty we see lies in the existence of that partly predictable and determined system. It is the natural harmony that can even be the greatest harmony of chaos.

With this idea in mind we decided to develop, at least in the theoretical aspect, a new formulation for programming: not object-oriented, not structured, but ecosystem-oriented. It would generate objects with abilities, reactions and actions determined in relation to a given environment in which each one is located, and in which it relates to other agents of the same or another type.
These agents would generate data that would allow audiovisual composition.

In a given environment, each agent would react, would be modified, and would in turn modify the environment. We could synthesize this interrelation as “character”.
What this method of programming attempts is precisely to generate character in the audiovisual pieces algorithmically, so that the piece is not seen as a logical or random sequence, but as driven by the character of these agents. Partly so that it loses the cold of the metal and the circuits of technology, and gains a little soul.

Studying possible methods to carry a language of this style forward, we decided first to lay the foundations of a possible universe.
For that we structured the language in a given theoretical field.
We can determine that any ecosystem is circumscribed to a space, physical and/or psychic; in our case (humans), a space of 3 dimensions, where data travels and is captured within a sphere of 5 perceptual fields, or senses.

These data usually contain a wide range of values, of which we perceive only some. As a reaction to these data we generate new data that other agents capture, and thus the interrelations are established.


We then decided to synthesize this system to determine the guidelines of a language structured for an ecosystem.

We say then that it must have a general Environment. In this environment it is determined how many perceptible fields are possible, N in total (a given object may perceive only 2 of them).

Working from the writings of Ouspensky, we determined that the perception of space is given by the perception of the senses and not the other way around. That is why we determined the importance and relevance of the ordering: after the environment come the perceptual channels, the data that travels through the universe.

Each of these fields we will call a channel.
A channel is a definition through which data delimited between a minimum and a maximum travels. The next component of this theoretical concept is the objects themselves.

Objects are elements that can be registered (acquiring the ability to perceive) to certain channels. With this registration they receive data from that channel.

They can in turn apply filters to this input, such as (see the sketch after this list):

1. Scaling the value to its own minimum and maximum.
2. Limiting the incoming data to a certain range.
3. Reducing the resolution of the data.
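A minimal sketch of this channel/object abstraction, in Processing for concreteness (all names here, Channel, EcoObject, etc., are hypothetical illustrations of the theory, not an existing implementation):

class Channel {
  float minVal, maxVal;  // a channel carries data delimited between a minimum and a maximum
  Channel(float minVal, float maxVal) {
    this.minVal = minVal;
    this.maxVal = maxVal;
  }
}

class EcoObject {
  float ownMin = 0, ownMax = 1;        // filter 1: scale to the object's own range
  float rangeLo, rangeHi;              // filter 2: accepted range, in channel units
  int steps = 16;                      // filter 3: reduce the resolution of the data
  FloatList buffer = new FloatList();  // history of data received on the channel

  EcoObject(float rangeLo, float rangeHi) {
    this.rangeLo = rangeLo;
    this.rangeHi = rangeHi;
  }

  // Called when a registered channel delivers a value to this object.
  void receive(Channel ch, float raw) {
    if (raw < rangeLo || raw > rangeHi) return;                // filter 2: range limit
    float v = map(raw, ch.minVal, ch.maxVal, ownMin, ownMax);  // filter 1: scaling
    v = round(v * steps) / float(steps);                       // filter 3: quantization
    buffer.append(v);                                          // keep the data buffer
  }
}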

Each object can have N dimensions of movement.
The use of N dimensions and N fields is meant to take this language to an abstraction useful for programming, so that it does not quickly become a series of repeated systems.

With this abstraction, unreal environments and ecosystems can be created that are nonetheless possible at the mathematical level and useful at the audiovisual level.
The objects also contain a buffer of the data previously received on each channel.

Now the important thing is to determine what type of data travels through the channels. For that there are 2 types of generators: external generators, and the objects themselves acting as generators.

External generators are objects that emit data on a channel.
An object acting as a generator can be the reaction to a channel (by an object) dumped into the same channel or into another, or the function of a channel as a constant generator.

The data can be constant, or determined at a location in N dimensions with a given range.

Another important abstraction is that of affinity. It determines the reaction of objects to each other, by proximity or distance with intrinsic values, and by the reception of data emitted by another object.
Affinity is controlled by a table that each object has for each channel. It has a single dimension of length N. In this space, points of repulsion and affinity are marked, each with its own values of incidence and effect. In turn, each object has its own position in that dimension, with a certain force.
The table then returns the type of reaction to a given affinity when an object is within the field of influence of its affinity in the space of the corresponding channel, as sketched below.
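A hypothetical sketch of such an affinity table (again in Processing, with invented names): a one-dimensional axis of length N on which affinity (positive) and repulsion (negative) points are marked, queried by an object's position and force.

class AffinityTable {
  float[] axis;  // the single dimension of length N

  AffinityTable(int n) {
    axis = new float[n];
  }

  void mark(int pos, float incidence) {
    axis[pos] = incidence;  // > 0 marks affinity, < 0 marks repulsion
  }

  // Reaction felt by an object sitting at `pos` with a given `force`,
  // within a field of influence of `radius` cells around it.
  float reactionAt(int pos, float force, int radius) {
    float reaction = 0;
    for (int i = max(0, pos - radius); i <= min(axis.length - 1, pos + radius); i++) {
      float falloff = 1.0 - abs(i - pos) / float(radius + 1);  // nearer points weigh more
      reaction += axis[i] * falloff;
    }
    return reaction * force;
  }
}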

With all these points, what we intend is to lay the theoretical and abstract foundations for the elaboration of a language oriented to the recreation of systems of reactive and affective interrelation (ecosystems), allowing the elaboration of generative audiovisual pieces whose components have their own defined character (spirit).

Pixelsticker

 

Synthesis
This is a free, open-source application developed in Processing.
It generates the specifications needed to create a mural whose image is formed from small parts: stickers.
Concept
Each sticker, with its own design, is equivalent to a color in the image. Thus, the concept of the pixel is exported to the urban environment, and a line of introspective language is drawn, where the pixel itself (the sticker) can dialogue with the general image.
It allows a long discourse between each one of the pixels (stickers) in itself, among them in relation to the others, in relation to the general image, and another within the general image by itself.
Generated Specifications
The program requests as input the dimensions of the mural (in cm), the original image, the number of sticker types to use, and their size (in cm).
With this data it returns a PDF with a grid that indicates the location of the stickers and how many of each type are necessary, with a gray tone corresponding to each type. A minimal sketch of the underlying idea follows.
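This is not the application itself, just a hedged sketch of the quantization step it describes: resize the image so each pixel is one sticker cell, then map each cell's gray tone to one of N sticker types (the file name and variable names are hypothetical; the real program also lays the grid out in a PDF, which is omitted here).

int numStickers = 4;                              // number of sticker designs (= gray levels)
float wallW = 300, wallH = 200, stickerSize = 10; // dimensions in cm

void setup() {
  PImage img = loadImage("source.jpg");           // hypothetical file name
  int cols = int(wallW / stickerSize);
  int rows = int(wallH / stickerSize);
  img.resize(cols, rows);                         // one pixel per sticker cell
  img.loadPixels();
  int[] counts = new int[numStickers];
  for (int i = 0; i < img.pixels.length; i++) {
    float b = brightness(img.pixels[i]);          // gray tone, 0..255
    int type = int(b / 256.0 * numStickers);      // which sticker design this cell gets
    counts[type]++;
  }
  for (int t = 0; t < numStickers; t++) {
    println("sticker type " + t + ": " + counts[t] + " units");
  }
}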
Free to community
The idea of keeping the code open to the public is for this application to flow through the community and be developed according to its needs. Today it has no graphical interface, it only supports baseline-optimized JPG, and it generates only the data listed above in the PDF. But logically this is not the end.
Use
Nowadays it works as a sketch within Processing and has no graphical interface. Configuration is done by setting a series of global variables that control the name of the image, the PDF to be generated, the measurements, etc. Everything is well indicated in the source.
Let’s see if anyone builds the graphical interface!

Live


A real-time projection that responds to the intervention of our body: a projection in which you can see a series of virtual bacteria that walk freely until they detect a presence, towards which they desperately run in search of its flesh, to devour its pain and sins.
Work [synthesis]
Reason
To one day discover the terrible sweetness of life, the painful truths; the cruelty of a god who finds pleasure both in exterminating us and in putting our very existence to the test, in exchange for a promise made in vain. To find a life that is less stable, a life in which it is not certain that life will continue being life; the fragility of strong, willful men, filled with flesh. That system which is our body, and upon which we form our magnanimous consciousness, is merely a union of ordered cells, conveniently susceptible to the forces of luck and coincidence. The fragility of every centimeter of our bodies, of every second of our lives, is so great that my very existence becomes an amazing miracle.

GENPUNK

A generative music system.
The development of this app is just another weapon serving our daily war against standard save/load presets, sounds, libraries, etc., which kill or limit the endless possibilities of expression in a live performance.

Based on the concepts of the TOPLAP movement.

Commands

enable channel – enables a channel [c1, c2, c3]
disable channel – disables a channel [c1, c2, c3]
vel integer – sets the tempo
solo channel – turns on one channel [c1, c2, c3] and turns off the other channels
less channel – turns off one channel [c1, c2, c3] and turns on the other channels
all – turns on all channels
rnd channel – randomizes a channel [c1, c2, c3, all]
rnd channel patch – randomizes a channel [c1, c2, c3, all] with patch [0…9]
patch vol att dec sus rel type ind – creates a patch in the slot IND with volume VOL [0.0…1.0], attack ATT [frames, e.g. 500], decay DEC [frames, e.g. 500], sustain SUS [0.0…1.0], release REL [frames, e.g. 500], type TYPE [0: sin, 1: quad, 2: tri, 3: saw, 4: saw2, 5: rnd, 6: rnd2, 7: rnd3]
changeused patch int1 int2 channel – changes all keys that have the patch int1 to int2 on channel [c1, c2, c3, all]
changeused note int1 int2 channel – changes all keys that have the note int1 to int2 on channel [c1, c2, c3, all]
clearused patch channel – sets all patch keys to -1 on channel [c1, c2, c3, all]
setused notall int1 channel – sets all note keys to int1 on channel [c1, c2, c3, all]
setused patchall int1 channel – sets all patch keys to int1 on channel [c1, c2, c3, all]
setused duraall int1 channel – sets all duration keys to int1 on channel [c1, c2, c3, all]
setused note int1 channel pos1 pos2 pos3… – sets note keys [pos1, pos2, pos3…] to note int1 on channel [c1, c2, c3, all]
setused patch int1 channel pos1 pos2 pos3… – sets patch keys [pos1, pos2, pos3…] to patch int1 on channel [c1, c2, c3, all]
setused dura int1 channel pos1 pos2 pos3… – sets duration keys [pos1, pos2, pos3…] to duration int1 on channel [c1, c2, c3, all]
kill allsounds – stops possible sound bugs [badly implemented duration keys, etc.]
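As an illustration, a hypothetical session built only from the commands above (all parameter values are arbitrary): set the tempo, enable a channel, define a patch in slot 0, fill the channel with random keys using that patch, give every key the same duration, and solo the channel.

vel 140
enable c1
patch 0.8 500 200 0.7 400 2 0
rnd c1 0
setused duraall 4 c1
solo c1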

Crafts

This piece is a whirlwind of data, a flow of digital currents that mold the composition.  The audio is generated by what the camera sees, and the video is modified by the generated sound and the microphone.

In this chaotic composition, a beating, a pulse, the order formed by the selection of continuous chaos is discovered through observation. It is the life of the data. It is not a digitized video; it is real-time animation based on algorithms, with audio and video capture acting as sensors and directors for the intervention and generation of the piece.

The reaction to the different data is not direct: it is produced by analyzing not only the moment but the evolution of the data. Thus the image not only reacts to the volume of the moment; it also decides the camera changes, and which type of camera to select, according to the evolution of the variation of the audio spectrum. This yields richer compositions, seen as a video piece instead of an almost random sequence of images.
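A minimal sketch of this indirect reaction, assuming the Minim audio library for Processing (the piece itself is not this code): instead of mapping the instantaneous volume to the image, track how the level evolves and switch the virtual camera only when that evolution spikes.

import ddf.minim.*;

Minim minim;
AudioInput in;
float smoothLevel = 0;   // slow-moving history of the audio level
int cameraMode = 0;      // which virtual camera is currently selected

void setup() {
  size(640, 480, P3D);
  minim = new Minim(this);
  in = minim.getLineIn();
}

void draw() {
  float level = in.mix.level();               // instantaneous volume
  float variation = abs(level - smoothLevel); // how fast the level is evolving
  smoothLevel = lerp(smoothLevel, level, 0.05);
  if (variation > 0.2) {
    cameraMode = (cameraMode + 1) % 3;        // the evolution, not the volume, decides
  }
  background(0);
  // ...render the scene with the camera selected by cameraMode...
}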

Can we assume that the universe is ruled by chaos?
Maybe chaos is the sum of infinite orders. Orders that are orders of other chaos with more orders.  The universe is composed of patterns. There is mathematics in what we breathe, numbers in our speech,  and algorithms in our lives.

It is our straight walk, and the splendor is the curve that we draw.
This work is not that. It is just a way to decipher the current orders.
A search for generative and compositional systems and algorithms.
Just an essay to turn the aesthetics of today into a systematic craft.
I think there is a moment when the technical search reaches its limit, and there you have to find the dialogue.

Audiovisual techniques are at that point: there is no technological surprise left, and this piece is a sign that creative processes can be synthesized.
This turns audio-rhythmic pieces into manufactures, or crafts, that can be produced in series. The time has come when we must use the explored techniques to say, to dialogue, or to shout.


Samples

Sample01
(Build_Cube)
(Create_Mov easy global)
(setVar mieasy (object ACTIVE_PLUGIN getObject))
(onAudioModo2 (object (getVar mieasy) call newPositions))
(Create_Gradient 0x444444 0xff8800 0xaa0077 linear 100 100 100 0)
(addVideoOp post drawImage (object ACTIVE_GRADIENT getObject) screen)

^
Sample02
(Build_Cube)
(Create_Mov easy global)
(Create_Gradient 0x444444 0xff8800 0xaa0077 linear 100 100 100 0)
(addVideoOp post drawImage (object ACTIVE_GRADIENT getObject) screen)

^
Sample03
(Build_Cube)
(setColor 240 30 30)
(setPosition 30 0 0)
(pushModel)
(Build_Cube)
(setColor 40 30 30)

^
Sample04
(define decir
(myTrace hola)
)
(deleteDefinition decir)

^
Sample05
(Build_Cube 60 0 textura1 0)
(Build_Poly (Vector (Vector -100 0 0)(Vector 100 0 0)(Vector 0 100 0)))

^
Sample06
(define decir
(myTrace hola)
(myTrace hola2)
)
(every_frame decir)
(add_to_every_frame (myTrace chau)(myTrace chau2))

^
Sample07
(define decir
(myTrace hola)
(myTrace hola2)
)
(every_frame decir)
(add_to_definition decir (myTrace chau)(myTrace chau2))

^
Sample08
(define decir (myTrace hola))
(define decir2 (myTrace chau))
(Create_Mixer)
(setSwitch 0 decir)
(setSwitch 1 decir2)

^
Sample09
(Create_Filter glow 0x33ccff 0.8 35 35 2 3 0 0)
(Build_Cube)
(setFilter ACTIVE_FILTER)

^
Sample10
(Build_Cube)
(addVideoOp post colorTrans 1 0.3 0.3 0.99 0 0 0 0)

^
Sample11
(videoClrPrev)
(Create_Mov easy global)
(Create_Mov noise global)
(Create_Filter convol 3 3 (Vector 2 -1 2 -1 2 -1 2 -1 2) 5)
(Build_Cube)
(addVideoOp prev colorTrans 1 0.99 0.9 0.999 0 0 0 0)
(addVideoOp post applyFilter ACTIVE_FILTER)
(Create_Gradient 0x444444 0xff8800 0xaa0077 linear 100 100 100 0)
(addVideoOp post drawImage (object ACTIVE_GRADIENT getObject) multiply)
(Create_ParticleSys icon2)
(define chanGrad (Modify_Gradient ACTIVE_GRADIENT(math random 0xFFFFFF)(math random 0xFFFFFF)(math random 0xFFFFFF)linear 100 100 100 (math random 360)))
(Create_Signal chanGrad)
(doOnAudioModo2 1)
(doOnAudioModo1 0)
(doOnEveryFrame 0)

^
Sample12
(Create_Gradient 0x444444 0xff8800 0xaa0077 linear 100 100 100 0)
(addVideoOp post drawImage (object ACTIVE_GRADIENT getObject) normal)
(Create_ParticleSys icon2)
(define chanGrad (Modify_Gradient ACTIVE_GRADIENT(math random 0xFFFFFF)(math random 0xFFFFFF)(math random 0xFFFFFF)linear 100 100 100 (math random 360)))
(Create_Signal chanGrad)
(doOnAudioModo2 1)
(doOnAudioModo1 0)
(doOnEveryFrame 0)

^
Sample13
(Build_Cube)
(Create_Filter convol 3 3 (Vector 2 -1 2 -1 2 -1 2 -1 2) 5)
(addVideoOp post applyFilter ACTIVE_FILTER)

^
Sample14
(Build_Circle)
(Create_Mov easy global)

^
Sample15
(Build_Line (Vector (Vector 0 0 0)(Vector 100 100 0)))

^
Sample16
(Build_Curve (Vector (Vector 0 0 0)(Vector 0 100 0)(Vector 100 100 0)))

^
Sample17
(Build_Text 2 (Vector hola que tal))

^
Sample18
(Build_Forma)
(setAlpha 50)
(doExtrudeAnim ACTIVE_MODEL)

^
Sample19
(Build_Sphere)
(setAlpha 50)
(doExtrude ACTIVE_MODEL)
(Create_Mov easy global)

Reference

Build_Cube

It creates a cube in the center of the scene.

(Build_Cube /scale layer texture isTexturedDinamic/)

Build_Box

It creates a box in the center of the scene.

(Build_Box /w h b layer texture isTexturedDinamic/)

^

Build_Sphere

It creates a sphere in the center of the scene.

(Build_Sphere /size layer texture isTexturedDinamic/)

^

Build_Cylinder

It creates a cylinder in the center of the scene.

(Build_Cylinder /r h layer texture isTexturedDinamic/)

^

Build_Pyramid

It creates a pyramid in the center of the scene.

(Build_Pyramid /r h layer texture isTexturedDinamic/)

^

Build_Quad

It creates a square in the center of the scene.

(Build_Quad /w h layer texture isTexturedDinamic/)

^

Build_Triangle

It creates a triangle in the center of the scene.

(Build_Triangle /w h layer texture isTexturedDinamic/)
^

Build_Poly

It creates a polygon in the center with set points at the vectors.

(Build_Poly /vectors layer texture isTexturedDinamic/)

^

Build_Circle

It creates a circle in the center of the scene. rext and rint are the radii, ini and fin the opening and closing angles; for a full circle, ini must be 0 and fin 72 (the circle works in 5-degree steps). If an inner circle is not desired, rint must be 0.

(Build_Circle /rext rint ini fin layer texture isTexturedDinamic/)

^

Build_Icon

It creates an icon in the center of the scene, from a referenced icon and a selected scale (1 is normal size).

(Build_Icon /icon scale layer texture isTexturedDinamic/)

^

Build_Line

It creates a line in the center with set points at the vectors.

(Build_Line /vectors grosor color alpha layer texture isTexturedDinamic/)

^

Build_Curve

It creates a curve in the center of the scene with points set at the vectors: first the initial vector, then the support vector, and finally the output vector; the support and output vectors are then repeated.

(Build_Curve /vectors grosor color alpha layer texture isTexturedDinamic/)

^

Build_Node

It creates a node in the center of the scene which can be used to link other models to it.

(Build_Node /layer/)

^

Build_Text

It creates a text in the center of the scene.

(Build_Text /font texto color width size align layer texture isTextured/)

^

Build_Forma

It creates an object of random shape.

(Build_Forma /cantPunt maxx maxy maxz layer texture isTextured/)

^

setColor

It tints the currently active model.

(setColor r g b)

^

setPosition

It sets the position of the currently active model.

(setPosition x y z)

^

setRotation

It sets the rotation of the currently active model.

(setRotation x y z)

^

getPosition

It updates the position buffer used to access each of the axes of the currently active model.

(getPosition)

^

getPositionX

Obtains the x position of the currently active model; to update it, call (getPosition) first.

(getPositionX)

^

getPositionY

Same as with the x position.

(getPositionY)

^

getPositionZ

Same as with the x position.

(getPositionZ)

^

getRotationX

Same as with the x position.

(getRotationX)

^

getRotationY

Same as with the x position.

(getRotationY)

^

getRotationZ

Same as with the x position.

(getRotationZ)

^

pushModel

Sets a new model as currently active, saving a buffer to return to the previous one.

(pushModel model)

^

popModel

It frees the buffer of the currently active model and returns to the previous one.

(popModel)

^

pushObject

Same as pushModel but with objects.

(pushObject obj)

^

popObject

Same as popModel but with objects.

(popObject)

^

pushPlugin

Same as pushModel but with plugins.

(pushPlugin plug)

^

popPlugin

Same as popModel but with plugins.

(popPlugin)

^

pushGrad

Same as pushModel but with gradient plugins.

(pushGrad grad)

^

popGrad

Same as popModel but with gradient plugins.

(popGrad)

^

pushFilter

Same as pushModel but with filter plugins.

(pushFilter filter)

^

popFilter

Same as popModel but with filter plugins.

(popFilter)

^

setModColor

If the current model contains variable color in its polygons:
esVariable: true/false
variacion1: variation index 1, 0-255
variacion2: offset of variation index 1, 0-255

(setModColor esVariable /variacion1 variacion2/)

^

setAlpha

It sets alpha levels of the current model.

(setAlpha alpha)

^

Currente_Color

It sets the color that will be used for models created from then on.

(Currente_Color color)

(Currente_Color r g b)

^

setBlur

It sets the blur for the currently active model.

(setBlur blurx blury quality)

^

setFilter

It sets the filters for the object.

(setFilter /filtros/)

^

setLink

Sets links to an object.

(setLink /model x y z rx ry rz/)

^

disableLink

It disables the link mode.

(disableLink)

^

Create_Move

It creates a plugin of continuous movement, which can be of two sorts:
easy: soft motion towards a new position.
noise: movement that resembles a push.
The object has to be assigned; if “global” is typed in, the whole scene is taken as the object.

(Create_Move tipo obj)

^

Create_Gradient

It creates a gradient object.
The mode corresponds to the fill mode: linear or radial.

(Create_Gradient color1 color2 color3 modo alpha1 alpha2 alpha3 angle)

^

Modify_Gradient

It modifies an existing gradient object.
The mode corresponds to the fill mode: linear or radial.

(Modify_Gradient objeto color1 color2 color3 modo alpha1 alpha2 alpha3 angle)

^

Create_ParticleSys

It creates a particles plugin.
icono: the element that will be used as a particle.
linked: whether it is linked to an object.
linkeda: the object it is linked to; if “global” is entered it refers to the scene in general.

(Create_ParticleSys /icono linked linkeda/)

^

Create_Mixer

It creates a mixer.

(Create_Mixer)

^

Create_Signal

It creates a signal that executes a determined function on every frame or in response to an audio event.

(Create_Signal func)

^

Create_Filter

It creates a filter object; the parameters are the same as the ones noted in the Flash documentation.
Types: glow, bevel, blur, color, convol, shadow, gradbevel, gradglow.

(Create_Filter type /params…/)

^

setVar

It sets/creates a variable. If “temp” is entered, the system uses a temporary variable local to the function.

(setVar nombre valor /”temp”/)

^

getVar

It gives back a variable. If “temp” is entered, the system uses a temporary variable local to the function.

(getVar nombre /”temp”/)

^

object

It operates on a given object.
objeto: the object’s name, a linked variable, or a constant reference.
accion:
set: sets the property val1 with val2.
get: gives back the property val1.
getFN: returns the result of the method val1, using val2 as a parameter.
call: executes the method val1, using val2 as a parameter.
getObject: gives back a pointer to the object.

(object objeto accion /val1 val2…/)

^

onAudioModo2

Action or list of actions to be performed at the AudioModo2 event: an event launched when the difference between the audio volumes of 2 frames is equal to or larger than a certain value.

(onAudioModo2 (acc) /(accs…)/)

^

addVideoOp

It adds a process to the video operations. These define the buffer and what will be shown on the screen in each frame.
The pipeline is: prev actions -> 3D render -> post actions
orden: prev or post
accion:
clear: clears the buffer.
drawImage (image mode): draws an image (it can be a gradient) in a certain mode (screen, add, normal, overlay, etc.); see Flash modes.
colorTrans (ra ga ba aa rb gb bb ab /buffer/): applies a color transformation; if a buffer is given, it operates on the buffer, not on the output.
applyFilter (filter /buffer/): applies a filter; if a buffer is given, it operates on the buffer, not on the output.
drawRender: draws the raw 3D render.
saveRender (index mode): saves the image to a video buffer in a set mode (screen, add, normal, overlay, etc.); see Flash 8 modes.
drawFromBuffer (index mode): draws the image from a video buffer in a set mode (screen, add, normal, overlay, etc.); see Flash 8 modes.

(addVideoOp orden accion /+params/)
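For instance, combining two of these operations (both lines appear verbatim in the samples above): a prev operation that fades the previous frame, and a post operation that composites the active gradient over the render.

(addVideoOp prev colorTrans 1 0.99 0.9 0.999 0 0 0 0)
(addVideoOp post drawImage (object ACTIVE_GRADIENT getObject) screen)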

^

createVideoBuffer

It creates a new video buffer for operations.

(createVideoBuffer ind)

^

videoClrPrev

It clears the list of video prev operations.

(videoClrPrev)

^

videoClrPost

It clears the list of video post operations.

(videoClrPost)

^

videoRenderModeNormal

Whether the video is rendered normally (always) or through the user’s operations: 0 or 1.
mode: draw mode (screen, add, normal, overlay, etc.); see Flash 8 modes.

(videoRenderModeNormal do mode)

^

every_frame

A list of actions or commands to be executed on every frame.

(every_frame (acc)/(acc..)/)

^

define

It defines a function with a set list of actions or commands.

(define nombre (acc)/(acc..)/)

^

getModel

A certain model becomes the active one.

(getModel modelString)

^

AudioPort

It operates on or gets data from the audio port.
accion:
getLevel: retrieves the current volume level.

(AudioPort accion)

^

vecToColor

It converts the r g b values into a single numeric color value.

(vecToColor r g b)

^

deleteModel

It deletes a certain model.

(deleteModel model)

^

deleteAllModels

It erases all models.

(deleteAllModels)

^

deletePlugin

It deletes a plugin.

(deletePlugin movobj)

^

deleteAllPlugin

It deletes all plugins.

(deleteAllPlugin)

^

deleteGrad

It eliminates a certain gradient.

(deleteGrad gradient)

^

deleteAllGrad

It eliminates all gradient objects.

(deleteAllGrad)

^

deleteFilter

It eliminates a certain filter.

(deleteFilter filter)

^

deleteAllFilter

It eliminates all filter objects.

(deleteAllFilter)

^

deleteAll

It eliminates all models, plugins, gradients and filters.

(deleteAll)

^

deleteVar

It eliminates a variable; if “temp” is entered, it eliminates a temporary variable of the function.

(deleteVar var /temp)

^

deleteAllVar

It eliminates all the variables.

(deleteAllVar)

^

deleteDefinition

It eliminates a definition/function.

(deleteDefinition def)

^

deleteAllDefinition

It eliminates all definitions.

(deleteAllDefinition)

^

Vector

It creates a data vector.

(Vector val1 val2 /val3…/)

^

resetGlobal

It restores the global scene to its initial center position.

(resetGlobal)

^

setSwitch

Sets the switch in the active mixer.

(setSwitch ind func)

^

doOnAudioModo2

It activates the reaction to audio mode 2 if the active plugin has it.

(doOnAudioModo2 1|0)

^

doOnAudioModo1

It activates the reaction to audio mode 1 if the active plugin has it.

(doOnAudioModo1 1|0)

^

doOnEveryFrame

It activates the reaction on all frames if the active plugin has it.

(doOnEveryFrame 1|0)

^

setBlendMode

It sets the way in which the object is drawn, see blendModes in Flash 8.

(setBlendMode mode)

^

clearScreen

It clears the screen; if a color is given, it replaces the video’s colorBase.

(clearScreen /color/)

^

videoColorBase

It replaces the video’s colorBase.

(videoColorBase color)

^

clearTotal

It eliminates all plugins, models, gradients, variables, definitions and video operations; after this operation, a screen clear has to be set on every frame.

(clearTotal)

^

doExtrudeAnim

It displaces some coordinates, with animation; ext, extmin, the limits and mult are xyz vectors.

(doExtrudeAnim /model, pasos, ext, extmin, limitinf, limitsup, mult /)

^

doExtrude

It displaces some coordinates; ext, extmin, the limits and mult are xyz vectors.

(doExtrude /model, ext, extmin, limitinf, limitsup, mult /)

Constants

ACTIVE_PLUGIN: last plugin accessed.
ACTIVE_OBJECT: last object accessed.
ACTIVE_MODEL: last model accessed.
ACTIVE_GRADIENT: last gradient plugin accessed.
ACTIVE_FILTER: last filter plugin accessed.

easyMove, methods & properties

newPositions (type: FN): calculates a new position and rotation.
enableRX posib mod (type: FN): enables rotX; possibility, modifier.
enableRY posib mod (type: FN): enables rotY; possibility, modifier.
enableRZ posib mod (type: FN): enables rotZ; possibility, modifier.
disableRX (type: FN): disables RX.
disableRY (type: FN): disables RY.
disableRZ (type: FN): disables RZ.
velocity (type: PROP): transition velocity.
doOnAudioModo1 (type: PROP): performs a newPositions on an onAudioModo1 event.
doOnAudioModo2 (type: PROP): performs a newPositions on an onAudioModo2 event.
doOnEveryFrame (type: PROP): performs a newPositions on each frame.

noiseMove, methods & properties

matrixDO [6 elements, 0/1] (type: FN): enables/disables the rotation and movement axes on which it operates.

amp (type: PROP): noise amplifier for the position.
ampRot (type: PROP): noise amplifier for the rotation.
AMPisLinkedToAudio (type: PROP): if true, amp equals the audio level times the multiplier.
AMPmultLinkedAudio (type: PROP): audio level multiplier.
AMPROTisLinkedToAudio (type: PROP): if true, ampRot equals the audio level times the multiplier.
AMPROTmultLinkedAudio (type: PROP): audio level multiplier.

ParticleSystem, methods & properties

setGlow color alpha blurx blury strength quality inner knockout (type: FN): adds glow to the particles.
setBlur blurx blury quality (type: FN): adds blur to the particles.
x (type: PROP): position of the particle generator.
y (type: PROP): position of the particle generator.
z (type: PROP): position of the particle generator.
rx (type: PROP): rotation of the particle generator.
ry (type: PROP): rotation of the particle generator.
rz (type: PROP): rotation of the particle generator.
minX (type: PROP): minimum limit on the X velocity.
maxX (type: PROP): maximum limit on the X velocity.
acelX (type: PROP): acceleration on the X velocity.
magnetoX (type: PROP): magnetism on the X axis.
minY (type: PROP): minimum limit on the Y velocity.
maxY (type: PROP): maximum limit on the Y velocity.
acelY (type: PROP): acceleration on the Y velocity.
magnetoY (type: PROP): magnetism on the Y axis.
minZ (type: PROP): minimum limit on the Z velocity.
maxZ (type: PROP): maximum limit on the Z velocity.
acelZ (type: PROP): acceleration on the Z velocity.
magnetoZ (type: PROP): magnetism on the Z axis.
_max (type: PROP): maximum amount of particles on screen.
_frecuencia (type: PROP): how many frames pass between particle launches.
_grav (type: PROP): force of gravity.
minLife (type: PROP): minimum lifetime, in frames.
maxLife (type: PROP): maximum lifetime, in frames.
doOnAudioModo2 (type: PROP): activates the generation of particles on audiomode2.
doOnAudioModo1 (type: PROP): activates the generation of particles on audiomode1.
doOnEveryFrame (type: PROP): activates the generation of particles on every frame.

Mixer, methods & properties

Mix (type: FN): mixes between the switches.
setSwitch ind func (type: FN): sets a switch.
doOnAudioModo2 (type: PROP): activates automatic reaction to audiomode2.
doOnAudioModo1 (type: PROP): activates automatic reaction to audiomode1.
doOnEveryFrame (type: PROP): activates automatic reaction on every frame.

Signal, methods & properties

delay cant (type: FN): delay before executing an action; the number of events that must occur before it is executed.

doOnAudioModo2 (type: PROP): activates automatic reaction to audiomode2.
doOnAudioModo1 (type: PROP): activates automatic reaction to audiomode1.
doOnEveryFrame (type: PROP): activates automatic reaction on every frame.

Model, methods & properties

setFreezable isFreezable modPorF (type: FN): whether the model freezes at random; modPorF is the probability of it freezing.

setFX1 do coordsFX1 modFacFX1 (type: FN): whether it randomly performs fx1; coords is an array with the indices of the coordinates that will be affected; modFac is a modifier, 30 by default.

setFX2 do modFacFX1 (type: FN): whether it randomly performs fx2; modFac is a modifier, 30 by default.

setModColor do modA modB (type: FN): gives every polygon of the object a different tone, simulating shading; modA and modB determine the variation.

setBlendMode mode (type: FN): object drawing mode (screen, add, etc.).

Multiuser Server

Tell text (type: FN): prints the text on the output to communicate something to the other users.

login name (type: FN): logs on to the server with a certain name. Check the output repeatedly until the server confirms the login. Once logged in, the executed actions will also be executed for the rest of the connected users.

logout (type: FN): disconnects from the multi-user server.

Flaxus
A TOPLAP Flash-based application

Synthesis

Flaxus is software developed for performing visual performances in real time under the TOPLAP manifesto, where the graphic piece is generated by code at the moment of its execution, bringing the artistic experience closer to the performance of music or dance. Flaxus also incorporates and raises a new paradigm: it allows a performer to produce visuals in response to something happening in situ, which hundreds of other participants can see activated by individual audio in other corners of the planet. In this way the usual aesthetic value is inverted: the same visual composition becomes reactive to each individual musical perception. Also promoting work in networks, Flaxus is a collaboratively operable tool, allowing the real-time realization of a piece between different performers over an internet connection.

Reason
We believe that the electronic visual arts need much more experimentation and deepening in their elemental search. Flaxus is a field work, a postulate: the anteposition of the visual performance to the musical; the use of networks to create visual elements capable of reacting to different environments simultaneously; the experience of collaboration from a distance; the programming of code and interpretive processes in real time.
With this piece we seek to probe the limits of the live visual experience.

Path
Some time ago we set out on a research path in the electronic visual arts, exploring their performative capacity live. We sought some similarity between the act of playing a musical instrument live and the generation of visual content.

We began with the act of the Visual Jockey, or VJ: a visual performance act in which an operator launches images that accompany the auditory stimulus.

We found in the act of VJing something closer to the DJ, who does not compose but mixes. Although this mixing was done with material of the VJ's own authorship, it was not what we were looking for.

There is a radical difference between playing a guitar live, note by note, composing the melody and determining at what moment something should be heard, and having previously recorded it note by note.
We saw the same problem in the act of VJing: the performance was not 100% live in the way that playing the piano can be.

We determined that it was important, then, to constitute a minimal visual unit, like the note in music.

The drawback is that music has only one data axis: the note is a wave that advances in a single direction, with different values over time. The image, in its digital form, contains much more data. Taking the flat screen as an example, it has 2 Cartesian axes that create the grid of pixels, unlike the single axis of audio.

To that we add the difficulty of the relationship between that plane and a fraction of visual time which, for aesthetically optimal fluidity, oscillates between 1/30th and 1/12th of a second.

We saw then that the composition had to happen in another way, and not by a metaphorical transference of the wave-creation process to the image, which only yielded different values of a single color enveloping the whole plane, or some more complex variant that did not satisfy our expectations.

To constitute a metaphorical transference and turn the image into a process similar to that of music, we could only determine minimal data on the timeline in real time, such as on/off states of certain elements, or graduations of color or form. We then found an indirect relation between playing an instrument and writing programming code.

Investigating this point we found the TOPLAP manifesto.
This manifesto belongs to a current of musical origin that today already has a visual arm.

The manifesto exposes the same problem and the same interest in performance, and solves it at the same point we arrived at.
It proposes a platform where the visual content is programmed (by programming we mean the writing and structuring of code that the machine understands and translates into images). In this way the performer is forced to act constantly to maintain an aesthetic flow in the image.

This formulation bears greater similarity to the act of playing live music, and is perhaps a return to the ideas proposed by people like Varèse at the origins of the electronic musical act. In those origins, one of the interests was to achieve a performance repeatable identically by electronic means, since the musical performance of those times depended on many factors (musicians, place, conductor, public, etc.); many factors determined the representation of the final work.
Part of our interest is to return to that factor of pure error that is typical of executing a piece in the moment: the ephemeral nature of the artistic execution itself.

The TOPLAP manifesto subscribes to these requirements as well: the piece is ephemeral; the code produced is not saved and is lost after its execution.
Another premise of the manifesto is that the code is visible to the public, just as the hands of the guitarist, in their exquisite movement, are part of the act beyond the incomprehensibility of those swings of the fingers.

We then set out to create software that corresponds to this manifesto in all its principles. It would not be the first such software; even the chosen name proves it.
There is a previous piece of software named Fluxus, in memory of the homonymous artistic movement. Flaxus, the name we chose, makes use of the same stigma, but replaces the initial syllable with “Fla”, a habitual replacement in software and on the web when a program has some kind of relationship with Flash, which is the software in which we developed the application.

But we did not stop there; we then set out to take a step forward with our tool: incorporating the reality of today's networks, the conceptual and atomic decomposition that can be achieved through the internet.

Our software is constituted on a redefinition of live visual execution: the possibility that a person generates visual schemes, composing live in one corner of the planet, while another person watches it live elsewhere.
But not as a passive transmission. By passive transmission we mean the passage of the entire data from one point to another without deformations greater than the loss of quality inherent to the transmission itself, for example transmission by television or radio.

The software has as its “core”, or forging body, a series of elements reactive to auditory stimuli. In this way the visual composition is altered and comes into play with the music applied to it.

Our redefinition is to impose the image as the driving force of the audiovisual exercise, allowing each user to apply the music they want.
In this way, if a performer somewhere in the world is writing live code, in each place where this code is reinterpreted it will react to the particular music or sound provided at that place and time, regardless of what happens at every other point where it is seen.

The visual aesthetic composition then stops being strict and becomes flexible to circumstance. But this flexibility is not limited to the linear transmission of communication, in which there is an emitter and a receiver: digital channels allow a circular performance in their transmission channels.
What Flaxus contemplates is the collaboration of the performative experience.

There is no single performer; the viewer can become a performer and modify the piece outward, towards the rest of the receivers, all of them possible emitters at any moment.

The performance becomes a dialogue between many parties; the paradigm of public performance dissolves, and can even be reversed at times.
Thus we achieve that possible aesthetic error as the very basis of the system: in each place or moment the final perception of the execution can be completely different without altering the artistic composition of the piece.

Grammar
The logical structure of this programming language is based on a system of modular hierarchies. These are nested one inside the other and resolve from the inside out, clearing unknowns in the same way as in mathematics.

Each structure usually has a verb as its first element and, as its second, a noun on which the verb operates, followed in some cases by different adjectives or secondary verbs that operate on the noun.
These structures always go in parentheses.
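For example, combining only structures that appear in the samples and reference below (Build_Cube, setColor, and math), the inner structure resolves first and its value feeds the outer one:

(Build_Cube)
(setColor (math random 255) 30 30)

Here (math random 255) resolves to a number first, and that number becomes the r argument of setColor, which in turn operates on the cube.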

Capacities
Flaxus is an experimental, conceptual tool, not high-performance software. It has great educational value, as Design By Numbers (originally created by John Maeda at MIT) had in its time, or as Processing (created by Ben Fry, Broad Institute, and Casey Reas, UCLA Design | Media Arts) can have today.

The tool allows the creation of simple polygons in three dimensions, in real time, in a space reactive to sound.

It incorporates the use of particles in two and three dimensions,
management of variables and mathematical calculations,
creation of real-time bitmaps,
different layer blending operations,
use of textures, typography and gradients generated in real time,
and direct operations on video and bitmaps: kerneling, convolution filters, bitmap copy, video feedback, etc.
All of these processes are executed in real time.

Technology
The entire software is built with Adobe Flash 8, which, despite having begun as a tool exclusively for the construction of animations, today has a powerful language oriented to aesthetic programming. Above all, it allows us to perform bitmap operations in real time and to be portable to any platform that supports the Adobe Flash 8 Player. In this way, with a single development, we have software that can run embedded in a web page and is multi-platform.

What’s Next
In principle, our next plan is to expand the documentation and add examples.
We believe that the analysis of examples is one of the most robust and effective ways to understand a tool.
The software is still in beta, so from now on we will be correcting errors reported by the community of users.

Our next step will be to implement a forum system so that the community can post questions, code or scripts.

Community
The software is made available to the community under the GNU GPL License,
sharing its code in case someone wants to modify or correct it, and allowing its free use by any individual.