Integral Parallel Computation
 The starting point:  actual intensive computation frequently requires more than one type of parallel computation, 
and even when only one type is needed, it might not fit perfectly into Flynn's taxonomy of parallel computation (SIMD, MIMD, or MISD).
Instead of the Flynn-like structural taxonomy (involving data & programs), a function-based taxonomy 
(involving variables & functions, according to Kleene's model of partial recursive functions) 
is proposed:
	-  Data parallel computation (a sort of SIMD in Flynn's taxonomy)
	-  Time parallel computation (a very special case of MIMD in Flynn's taxonomy)
	-  Speculative computation (the currently almost ignored MISD in Flynn's taxonomy)
because, on the one hand, there are languages where the distinction between data and programs does not hold 
(for example, in LISP or PROLOG), and, on the other hand,
any application of parallel computation involves a more or less complex function applied to a stream or to a vector of 
variables.
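A minimal sketch of these three forms, in plain sequential Python that only mirrors the functional structure (the functions f, g, h and the selection predicate are hypothetical, chosen for illustration only): data parallel applies one function to every element of a vector of variables, time parallel applies a composition of functions, stage by stage, to a stream of variables, and speculative computation applies several functions to the same variable, selecting one result afterwards.

	# Hypothetical functions used only to illustrate the three forms of parallelism.
	def f(x): return x + 1
	def g(x): return 2 * x
	def h(x): return x * x

	# Data parallel: the same function f applied to every element of a vector;
	# each application is independent, so a SIMD-like machine can do them all at once.
	def data_parallel(vector):
	    return [f(x) for x in vector]

	# Time parallel: a composition h(g(f(x))) applied to a stream; on a pipeline
	# (a special case of MIMD) the stages work simultaneously on successive elements.
	def time_parallel(stream):
	    for x in stream:
	        yield h(g(f(x)))

	# Speculative: several functions applied to the same variable (MISD-like);
	# all results are computed, and the selection is made only afterwards.
	def speculative(x, select):
	    results = [f(x), g(x), h(x)]
	    return results[select(x)]

	if __name__ == "__main__":
	    print(data_parallel([1, 2, 3, 4]))                   # [2, 3, 4, 5]
	    print(list(time_parallel(range(4))))                 # [4, 16, 36, 64]
	    print(speculative(5, lambda x: 0 if x < 3 else 1))   # g(5) = 10

On real hardware the first form would be executed by an array of execution units, the second by pipeline stages working simultaneously on successive stream elements, and the third by parallel units whose results are discarded except for the one selected.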
By definition, Integral Parallel Computation involves all kinds of parallel processes:
	-  Data parallel computation 
	-  Time parallel computation supported by speculative computation
	-  Input-output transfers transparent to the main computation
where the first two compute functions, while the last transfers variables: either inside the system, between the
data parallel machine and the time parallel machine, or between the system and the external memory, as vectors or streams of variables.
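As a rough illustration of the last item, the following sketch (hypothetical names; a plain Python thread stands in for the independent I/O network) overlaps the transfer of the next vector from an external memory with the data parallel processing of the current one, so that the transfers stay transparent to the main computation:

	import threading

	# Hypothetical "external memory": a list of input vectors to be processed.
	EXTERNAL_MEMORY = [[i, i + 1, i + 2] for i in range(0, 12, 3)]

	def fetch(index, buffer):
	    # Stand-in for the I/O network: copies one vector from external memory into a local buffer.
	    buffer[:] = EXTERNAL_MEMORY[index]

	def data_parallel_compute(vector):
	    # Stand-in for the data parallel machine: one function applied to every element.
	    return [x * x for x in vector]

	def run():
	    results = []
	    current = []
	    fetch(0, current)                      # initial transfer
	    for i in range(len(EXTERNAL_MEMORY)):
	        nxt = []
	        io = None
	        if i + 1 < len(EXTERNAL_MEMORY):
	            # Start the next transfer in parallel with the current computation,
	            # so the I/O remains transparent to the main computation.
	            io = threading.Thread(target=fetch, args=(i + 1, nxt))
	            io.start()
	        results.append(data_parallel_compute(current))
	        if io is not None:
	            io.join()
	        current = nxt
	    return results

	if __name__ == "__main__":
	    print(run())

The double-buffering pattern shown here is only an assumption about how such transparency can be obtained; the point is that the computing engines never wait for a transfer that could have been started earlier.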
The  Connex Project  evolved toward the first implementation of an integral parallel machine containing:
	-  ConnexArrayTM - a data parallel machine
	-  Stream Accelerator - a time parallel machine with speculation support
	-  IO Plan - an independent network used by ConnexArrayTM to communicate with the external memory.
Thus, the Connex Project supports integral parallel computation through what is described as an integral
parallel architecture. Main collaborators: Bogdan Mitu, Dan Tomescu.
 Important comment:  parallel computation is ubiquitous 
Any high-performance computing machine must have an integral parallel architecture. The current implementations offer two extreme cases:
	-  complex integral parallel computation; for example: superscalar (data parallel), pipelined (time parallel) processors with speculative execution
	-  intensive integral parallel computation; for example: the integral parallel architecture proposed in the  Connex Project .
The syntagma "parallel computation" is starting to become obsolete because of the ubiquity of parallelism. Therefore: 
computation tends to manifest itself as complex (in general-purpose machines) or as intensive 
(in supercomputing and in embedded co-processing).
 References 
	[Stefan '00] Gh. Stefan: "Parallel Architecturing Starting from Natural Computational Models", in Proceedings of the Romanian Academy, 
Series A: Mathematics, Physics, Technical Sciences, Information Science, vol. 1, no. 3, Sept.-Dec. 2000.
	[Connex Technology '05] ***: "Integral Parallel Computation - The Connex Approach", internal report, 
Connex Technology Inc., 2005. 
	[Stefan '06] Mihaela Malita, Gheorghe Stefan, Marius Stoian: "Complex vs. Intensive in Parallel Computation", 
in International Multi-Conference on Computing in the Global Information Technology - Challenges for the Next Generation of 
IT&C (ICCGI 2006), Bucharest, Romania, August 1-3, 2006.
	[Stefan '06a] Gh. Stefan: "Integral Parallel Computation", in Proceedings of the Romanian 
Academy, Series A: Mathematics, Physics, Technical Sciences, Information Science, vol. 7, no. 3, Sept.-Dec. 2006.
	[Stefan '06b] Gheorghe Stefan: "The CA1024: SoC with Integral Parallel Architecture 
for HDTV Processing", invited paper at the 4th International System-on-Chip (SoC) 
Conference & Exhibit, November 1-2, 2006, Radisson Hotel, Newport Beach, CA. 
	[Malita '07] Mihaela Malita, Gheorghe Stefan, Dominique Thiebaut: "Not Multi-, but Many-Core: Designing Integral Parallel 
Architectures for Embedded Computation", in International Workshop on Advanced Low Power Systems, held in conjunction with the 21st International 
Conference on Supercomputing, June 17, 2007, Seattle, WA, USA. 
	[Malita '08] Mihaela Malita, Gheorghe Stefan: "On the Many-Processor Paradigm", in H. R. Arabnia (Ed.): 
Proceedings of the 2008 World Congress in Computer Science, Computer Engineering and Applied Computing, 
vol. PDPTA'08 (The 2008 International Conference on Parallel and Distributed Processing Techniques and Applications), 2008.