Lighting independence and automatic learning of the objects to be recognized remain the main "stumbling blocks" of computer-based image understanding. This holds for a great many application areas, where the usual "countermeasures" are simplification and standardization: the image understanding system is restricted to predefined, controlled environments, and the interplay of flexibility and robustness in recognition has therefore rarely received the attention it deserves. The main application area of this work is robot soccer, which is dominated by color-based recognition tasks. Until very recently, these tasks were likewise simplified by prescribing strict lighting conditions. As these restrictions are now gradually being lifted, solutions are needed that can adapt to changing lighting conditions and automatically learn the objects to be recognized. This work builds on a vision system previously designed by the author, which has proven to be flexible and quite robust while also setting standards in ease of use, adaptive power, and extensibility. Its competitiveness was confirmed when the team using it took 2nd place at the FIRA RoboWorld Cup, the world championship in robot soccer. Nevertheless, some issues with the system and its design remain. These issues, together with the recent trend in robot soccer toward opening up the previously tightly restricted and controlled environments, motivate this work. In detail, its "cornerstones" are variability, generality, reliability, autonomy, and feasibility with respect to the image understanding task at hand. In short, this work focuses on automatic, lighting-tolerant, color-based recognition of moving objects, targeted mainly at the global vision domain of robot soccer but applicable to many other surveillance-related areas.