1 GRID Initiatives at CSIC
Valencia, 15 July 2008
Jesús Marco de Lucas, Research Professor, CSIC, Instituto de Física de Cantabria
Thanks to: Jorge Gomes, Isabel Campos, Rafael Marco, Jose Salt
2 Outline
- Why Grid in CSIC?
- An impressive track record: the DataGrid era and CrossGrid
- LHC Computing Grid & EGEE
- Interactive European Grid (i2g)
- EGEE-III, DORII, EUFORIA
- GRID-CSIC
- NGI, EGI
3 Collaboration
- An increasing problem: fragmentation of knowledge
  - Too many fields, too much information, complex modelling
- Why is collaboration so important?
  - Projects: a big success in Industry and in Science
  - Add globalization...
- Why is collaboration so difficult?
  - Who were Newton's collaborators?
  - How do you understand collaboration, for engineers and for scientists?
- How can we support collaboration in the (post-)Internet era?
4 Collaboration: my experience as a researcher
- Physicist with (some) computing background, (some) maths background, (some) electronics background, and NO management/collaboration background
- Working ALWAYS in medium (>10) to large (...)
5 Collaboration: any answer?
- Join [distributed/multidisciplinary] forces to make a project REAL
- Collaborative & managerial tools:
  - share resources in an open framework
  - support interaction
  - recognize efforts and contributions
  - get REAL added value
- Where, and what for, was the WEB born?
6 What does this man do here?
7 A good example: flood management
- Problem: flooding crisis in Slovakia
- Solution: monitoring, forecasting, simulation, real-time actions
8 Flood management: data, models, simulations
- Precipitation forecasts are based on meteorological simulations at different resolutions, from the meso-scale to the storm-scale. For flash floods, high-resolution (1 km) regional atmospheric models have to be used along with remote-sensing data (satellite, radar).
- From the quantitative precipitation forecast, hydrological models determine the discharge from the affected area.
- Hydraulic models then simulate water flow through the various river structures to predict the impact of the flood (see the sketch below).
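A minimal sketch of the three-stage forecasting chain just described. All function names, file inputs and threshold values are hypothetical stand-ins for the full numerical models; only the structure (meteorological, then hydrological, then hydraulic) comes from the slide.

```python
# Toy pipeline: precipitation forecast -> discharge -> flood impact.
# Every function below stands in for a full numerical model.

def precipitation_forecast(region, resolution_km=1.0):
    """Stand-in for a high-resolution regional atmospheric model run,
    combined with remote-sensing data (satellite, radar)."""
    return {"rainfall_mm": 120.0, "region": region, "resolution_km": resolution_km}

def discharge_from_rainfall(forecast):
    """Stand-in for the hydrological model: quantitative precipitation
    forecast -> discharge from the affected catchment area."""
    return 0.8 * forecast["rainfall_mm"]  # toy runoff coefficient

def flood_impact(discharge):
    """Stand-in for the hydraulic model: water flow through river
    structures -> predicted flood impact."""
    return "WARNING" if discharge > 80.0 else "OK"

if __name__ == "__main__":
    qpf = precipitation_forecast("Vah basin", resolution_km=1.0)
    print(flood_impact(discharge_from_rainfall(qpf)))
```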
9 Computing-intensive science
- Science is becoming increasingly digital and needs to deal with growing amounts of data
- Simulations get ever more detailed:
  - Nanotechnology: design of new materials from the molecular scale
  - Modelling and predicting complex systems (weather forecasting, river floods, earthquakes)
  - Decoding the human genome
- Experimental science uses ever more sophisticated sensors to make precise measurements
  - Needs high statistics
  - Huge amounts of data
  - Serves user communities around the world
10 INTERACTIVE EUROPEAN GRID (i2g)
- Provide an advanced grid-empowered infrastructure for scientific computing, targeted at demanding interactive and parallel applications
- Provide services to integrate computing resources into a grid
- Coordinate the deployment, maintenance and operation of the grid infrastructure
- Provide support for Virtual Organizations and resource providers
- Coordinate resource providers and virtual organizations
- Provide a development infrastructure for research activities
- Test and validate new middleware components
- Ensure adequate network support
11 Grid Infrastructure
- 12 sites in 7 countries
- ~1000 cores (Xeon, Opteron, Pentium)
- ~77 TB of storage
- Resources shared with other infrastructures
- ~10 FTE for Grid Operations Management
12 Grid Infrastructure
- Two sets of sites:
  - Production: 9 sites
  - Development: 4 sites
13 Services
- 12 sites, 3 management centres
- Core services and distributed services, taking advantage of the partners' expertise: redundancy, better use of resources
- Production core services: CrossBroker, RAS, BDII, VOMS, LFC, MyProxy, APEL accounting, GridICE, R-GMA
- Development core services: CrossBroker, RAS, BDII, VOMS, LFC, MyProxy, pure gLite WMS, Autobuild repository, R-GMA for development
- SAM, network monitoring, security coordination
14 Capacity
- CPU capacity is higher than in the technical annex
- Storage is higher than in the technical annex

Site     | CPU (SI) tech annex | CPU (SI) actual | Storage tech annex (TB) | Storage actual (TB)
LIP      | 27,600              | 53,560          | 1.5                     | 1.6
IFCA     | 243,100             | 1,036,224       | 20                      | 21.2
CESGA    | 24,100              | 68,000          | 5                       | 30
IISAS    | 38,400              | 44,800          | 0.2                     | 0.5
PSNC     | 84,700              | 193,200         | 5                       | 15
BIFI     | 33,200              | 38,440          | 2                       | 0.1
CYFRONET | 18,900              | 20,790          | 1                       | 1.0
ICM      | 50,200              | 49,900          | 1.5                     | 1.5
FZK      | 40,000              | 181,192         | 1.5                     | 1.8
UAB      | 13,000              | 12,082          | 0.36                    | 0.4
TCD      | 17,700              | 11,928          | 0.4                     | 4.0
GUP      | 6,600               | 10,826          | 0.5                     | 0.5
TOTAL    | 597,500             | 1,720,942       | 39                      | 77.6
15 Overall usage
(Chart: job submissions by job type)
16 Usage: sites and users
(Charts: jobs per site; registered users)
17 VO usage
- Most active application VOs: ienvmod, ihep, ibrain, ifusion, iplanck, iusct
18 Added value: CrossBroker
- MPI and interactive job scheduling; the heart of the i2g workload management
- Schedules MPI jobs with a gLite-compatible broker
  - Decoupled from the MPI implementation
  - Enables MPI inside clusters and across clusters
  - Selects the best possible site or set of sites for running
  - Supports policy "extensions" in the information system
- Enables interactivity transparently
  - Built-in support for i2g visualization and steering mechanisms
  - Priority for interactive jobs
- Flexible support for interactive agents
  - Glide-ins for fast application startup
  - Agents are submitted together with jobs to enable injection of interactive applications on cluster nodes
- A submission sketch follows below.
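A minimal sketch of handing an MPI job description to a gLite-style broker such as CrossBroker. The JDL attributes SubJobType and Interactive, and the CLI name i2g-job-submit, are assumptions modelled on the i2g user documentation of the time; check an actual installation for the exact spelling.

```python
# Sketch: write a JDL job description and hand it to the broker.
import subprocess
import tempfile

JDL = """
Executable   = "my_simulation";
JobType      = "Parallel";      // routed through CrossBroker
SubJobType   = "openmpi";       // assumed attribute: selects the MPI flavour
NodeNumber   = 8;               // MPI processes requested
Interactive  = true;            // assumed attribute: ask for interactive priority
InputSandbox = {"my_simulation"};
"""

with tempfile.NamedTemporaryFile("w", suffix=".jdl", delete=False) as f:
    f.write(JDL)
    jdl_path = f.name

# The broker selects the best site (or set of sites) for the job.
subprocess.run(["i2g-job-submit", jdl_path], check=True)  # assumed CLI name
```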
19 Added value: CrossBroker
(Architecture diagram: the User Interface and the Migrating Desktop user-friendly GUI reach the CrossBroker via RAS; the CrossBroker uses the gLite information index, catalogue and MyProxy to dispatch sequential and parallel jobs to standard gLite clusters and i2g MPI clusters, with OpenMPI inside a cluster and PACX-MPI across clusters.)
20 Added value: CrossBroker
(Diagram: interactive job flow. The CrossBroker sends a batch job plus glide-in through the CE and LRMS to a worker node, so an i2g interactive job can start alongside batch work; the user connects through the User Interface, Migrating Desktop and RAS.)
21 Added value: MPI
- MPI support in gLite-based grids: enable a gLite cluster to run MPI jobs properly
- MPI_START (developed by i2g)
  - Common layer to handle MPI application startup at the cluster level
  - Hides cluster and MPI implementation details from the user
  - Provides hooks for application management
  - Now adopted by EGEE and other infrastructures as the method to start MPI applications (conceptual sketch below)
- MPI flavours:
  - OpenMPI: an implementation with excellent characteristics for grid computing (modular)
  - PACX-MPI: runs jobs across sites
- Debug tools integrated in the i2g framework: Marmot, MPI-Trace
- Support for MPI in PBS and SGE grid clusters (CE and LRMS changes and configuration)
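A conceptual sketch of what an MPI_START-like wrapper does at the cluster: hide the scheduler and MPI implementation behind one entry point and offer application hooks. This is an illustration of the idea, not the real MPI_START shell implementation.

```python
# Sketch: uniform MPI startup with pre/post hooks, PBS-style node discovery.
import os
import subprocess

def detect_hosts():
    """Read the node list from the local resource manager (PBS here;
    SGE and others expose it through different variables/files)."""
    nodefile = os.environ.get("PBS_NODEFILE")
    if nodefile:
        with open(nodefile) as f:
            return [line.strip() for line in f]
    return ["localhost"]

def mpi_start(executable, flavour="openmpi", pre_hook=None, post_hook=None):
    """Launch an MPI binary uniformly, with optional application hooks."""
    hosts = detect_hosts()
    if pre_hook:
        pre_hook()                  # e.g. stage input files to the nodes
    cmd = {"openmpi": ["mpirun", "-np", str(len(hosts)), executable]}[flavour]
    subprocess.run(cmd, check=True)
    if post_hook:
        post_hook()                 # e.g. collect and upload the output
```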
22 Added value: MPI
(Diagram: the user's MPI job and its MPI_START parameters travel from the User Interface / Migrating Desktop via RAS and the CrossBroker to the CE and LRMS; on the worker node MPI_START invokes MPIEXEC on an MPI_START-aware application.)
- The CrossBroker is MPI implementation independent
- Encapsulation of: MPI implementation, LRMS and cluster details
- Injection of MPI_START is also possible for sites without MPI_START
23 Added value: interactivity
- Take control of your application
- Application steering and graphical visualization
  - Powerful graphical visualization (GVID)
  - Application steering while running remotely (GVID)
  - Support for OpenMPI and PACX-MPI applications
  - All from an easy-to-use desktop (Migrating Desktop)
- Interactive terminal: i2glogin and glogin
  - SSH-like access, fully compatible with gLite and GSI security
  - Secure, low-latency, bi-directional connectivity (see the relay sketch below)
  - Excellent for debugging and working remotely
  - Used to tunnel GVID and application steering
  - Can be used to tunnel other applications and data
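The conceptual core of a glogin-style channel is a low-latency, bi-directional pipe between the user and a process on a worker node. The real i2glogin adds GSI authentication and gLite integration; this sketch only shows the bidirectional relay over a plain TCP socket, with a placeholder host and port.

```python
# Sketch: full-duplex relay between the local terminal and a remote endpoint.
import socket
import sys
import threading

def pump(read, write):
    """Copy bytes in one direction until the stream closes."""
    while True:
        data = read(4096)
        if not data:
            break
        write(data)

def interactive_channel(host="wn01.example.org", port=9999):  # placeholders
    s = socket.create_connection((host, port))

    def to_user(data):
        sys.stdout.buffer.write(data)
        sys.stdout.buffer.flush()

    # One thread per direction gives full-duplex traffic: keystrokes and
    # steering commands go out; application output (or tunnelled
    # visualization frames) comes back.
    threading.Thread(target=pump, args=(s.recv, to_user), daemon=True).start()
    pump(sys.stdin.buffer.read1, s.sendall)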
24 Added value: tools
- Assist in the infrastructure management
- Accounting
  - Accounting portal
  - Support for job type, parallel type and interactive type accounting
  - Support for MPI accounting
  - Collects data from APEL and the Resource Brokers
- Monitoring: SAM (Service Availability Monitoring) test development
  - PACX-MPI, OpenMPI, interactivity, i2g software versions, VO-specific tests
- Other tests and tools
  - Verify passwordless SSH connectivity for MPI (sketch below)
- Improve reliability: replication methods for VOMS and LFC
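A sketch of the kind of sanity test mentioned above: verify passwordless SSH among worker nodes, which MPI implementations need in order to spawn remote ranks. The node names are illustrative; the SAM framework itself is not modelled here.

```python
# Sketch: SAM-style check for passwordless SSH connectivity between WNs.
import subprocess

def ssh_ok(node, timeout=10):
    """BatchMode=yes makes ssh fail instead of prompting for a password."""
    result = subprocess.run(
        ["ssh", "-o", "BatchMode=yes", "-o", f"ConnectTimeout={timeout}",
         node, "true"],
        capture_output=True,
    )
    return result.returncode == 0

nodes = ["wn01.example.org", "wn02.example.org"]  # hypothetical worker nodes
failures = [n for n in nodes if not ssh_ok(n)]
print("PASS" if not failures else f"FAIL: {failures}")
```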
25 Advanced strategies for interactive jobs
- Immediate execution: jobs either start immediately or fail
  - Implemented for SGE and in production at LIP
- Faster application startup at the CE level with the pbs jobmanager instead of the LCG jobmanager
- Prioritization of jobs: a method to preempt batch jobs, tested with PBS
26 Interoperability
- i2g is committed to interoperability. It is fundamental to:
  - Enlarge the infrastructure and attract users
  - Enable easier porting
  - Share resources
- Study infrastructure interoperability
- Define scenarios for deploying i2g middleware on top of gLite
- Enable other VOs and sites to use i2g developments:
  - Enabling users from other VOs to access i2g resources
  - Enabling sites from other infrastructures to join the i2g infrastructure
  - Enabling other projects or VOs to deploy the i2g developments on gLite-based infrastructures
- Involvement with national grid initiatives
27 Interoperability
(Diagram: the Int.EU.Grid and EGEE infrastructures side by side. Each site runs the standard gLite services (LCG-CE, batch server, MonBox, site-BDII, SE, UI, gLite WNs) with the i2g CE, WN and UI software layered on top. The i2g CrossBroker, top-BDII, LFC and registry interoperate with the EGEE lcg-RB, top-BDII, LFC and registry; MPI, visualization and the Migrating Desktop come from the i2g layer.)
28 Development support
- Savannah at FZK
- Repository: SVN + CVS
- Bug tracker
- Autobuild for SL3 and SL4
- Development testbed
- Developers guide
- Middleware validation
29 From development to deployment
- Development support: development guidelines, repositories, Autobuild, development infrastructure
- Integration: packaging, installation scripts, integration in a release
- Validation: verify installation, test functionalities
- Deployment: coordinate the sites, ensure proper deployment
- Flow: source repository and Autobuild feed the development testbed, then validation, then the production infrastructure, with separate development, validation and production repositories along the way
30 User and site support
- Wiki, web pages, SA1 mailing lists, VRVS
- Support team: contributions from all partners, plus JRA1 and NA3
- Dropped the helpdesk tool: little used and not well accepted
- Concentrate on the wiki and on mailing lists for support
31 Security
- Authentication based on IGTF CAs
  - Good contacts with national CAs; coordination with EUGridPMA
- Authorization based on VOMS, with a fault-tolerant setup
- Security policies: follow the JSPG policies; VOs can have stricter policies
- Active security: developed and tested a distributed IDS
- Incident and vulnerability management
  - Tracking vulnerabilities, security contacts, coordination in case of intrusion
32 Grid Operations Management
- Infrastructure management: task, site and service coordination; coordination with other activities and with the VOs
- Ensuring quality:
  - SAM: i2g-specific tests (OpenMPI, PACX-MPI, interactivity...), VO-specific tests, site notifications
  - GridICE monitoring for production and development (separate R-GMA infrastructures)
  - Accounting: job type analysis, MPI accounting, etc., with information gathered from the brokers
33 Conclusions
- The i2g infrastructure was successful: it supported multiple applications from multiple domains
- It showed that:
  - MPI and interactivity can be well supported in grids
  - A wide range of applications can be supported with grid computing instead of traditional HPC
  - Interoperability is possible and is a desired feature
  - It can be done on top of gLite, but it requires time, effort, dedication, patience and a very good and supportive team
34 Conclusions
- i2g achievements and legacy:
  - Middleware components enabling interactivity and visualization, parallel computing support, and user-friendly access to the infrastructure, all on top of gLite
  - Procedures and methods to enable interoperability across infrastructures, MPI and interactivity on gLite grids, and immediate job execution
  - Experience in deploying and running infrastructures for sequential and parallel, batch and interactive workloads
  - Tools to assist in the operation of such infrastructures
  - An example for others to follow
  - A production infrastructure that is fully operational!
35 Applications porting and development: MPI support
- Two levels of support for MPI applications:
  - Support for already existing MPI applications: compiler support issues, infrastructure-oriented services, application-specific work
  - Modifying serial applications for the grid environment: parametric simulations (sweeping over parameter spaces)
- Intra-cluster versus inter-cluster MPI: it is a question of latencies; some applications can be adapted to work in such an environment
36 Applications porting and development: intra-cluster MPI support
- Compiler support
  - gLite offers only limited Fortran support (F77)
  - We have extended the support to F90 (Intel compilers) to avoid static compilation; F90 applications are very widespread
- Infrastructure-oriented support
  - Low-latency Infiniband cluster integrated with gLite
  - Flags for detailed hardware configuration
- Application-oriented support (see demo session)
  - Scripts and hooks have been developed for parametric MPI simulations; the sketch below shows the sweep pattern
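A sketch of the parametric-sweep pattern: generate one grid job per point of a parameter grid. The JDL template, parameter names and submit command mirror the earlier CrossBroker example and are assumptions, not the project's actual scripts.

```python
# Sketch: one MPI job per (temperature, pressure) point of a parameter grid.
import itertools
import pathlib
import subprocess

TEMPLATE = """Executable = "simulate";
Arguments  = "--temperature {t} --pressure {p}";
JobType    = "Parallel";
SubJobType = "openmpi";
NodeNumber = 4;
"""

temperatures = [280, 290, 300]   # illustrative parameter values
pressures = [1.0, 2.0]

for t, p in itertools.product(temperatures, pressures):
    jdl = pathlib.Path(f"job_T{t}_P{p}.jdl")
    jdl.write_text(TEMPLATE.format(t=t, p=p))
    subprocess.run(["i2g-job-submit", str(jdl)], check=True)  # assumed CLI name
```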
37 Applications porting and development: demos of OpenMPI support
- int.eu.grid capabilities employed: OpenMPI support, interactivity, monitoring of simulation progress, using SEs to store output, using MPI hooks to upload simulation input
- Reacflow: simulation of large-scale explosions (e.g. hydrogen-air mixtures)
  - A sequential version already existed at the European Commission Joint Research Centres at Ispra and Petten (C++ and Fortran 77)
  - The MPI parallelisation was a joint effort between JRC Petten and GUP Linz
  - Adaptive mesh refinement, dynamic load balancing
38 Applications porting and development: PACX-MPI support
- Spin glass using Parallel Tempering
  - Simulation of a Heisenberg spin glass
  - Many replicas of the same system need to be simulated: MPI distributed
  - The temperature of the replicas is controlled and set periodically for all of them by a master process: the Parallel Tempering algorithm (skeleton below)
- Intensity-Modulated Radiation Therapy
  - Distribution of the Monte Carlo simulations for optimization of the radiation dose using MPI; all MPI processes are independent
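A skeleton of the Parallel Tempering layout just described, using mpi4py: each MPI rank simulates one replica at its own temperature, and rank 0 periodically gathers energies and reassigns temperatures. The energy update and swap rule are placeholders, not the production spin-glass code.

```python
# Skeleton: one replica per MPI rank; rank 0 acts as the tempering master.
import random
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

temperature = 0.5 + 0.1 * rank      # initial temperature ladder
energy = random.random()            # placeholder for the spin-glass energy

for sweep in range(100):
    energy += random.gauss(0, 0.01)         # placeholder Monte Carlo sweep
    if sweep % 10 == 0:
        energies = comm.gather(energy, root=0)
        if rank == 0:
            # Master performs the Parallel Tempering step; here simply:
            # the lower the energy, the lower the assigned temperature.
            order = sorted(range(size), key=lambda i: energies[i])
            temps = [0.5 + 0.1 * order.index(i) for i in range(size)]
        else:
            temps = None
        temperature = comm.scatter(temps, root=0)
```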
39 Applications porting and development: interactivity + visualization
- Evolution of pollution clouds in the atmosphere: uses OpenMPI, interactivity, visualization; integrated in the Migrating Desktop
- Visualization of Gaussian runs: uses interactivity, visualization
- Visualization of plasma in fusion devices: uses OpenMPI, interactivity, visualization; integrated in the Migrating Desktop
40 Applications porting and development: interactivity with GridSolve
- Ultrasound Computing Tomography: a method for breast cancer detection
  - Data are taken by an ultrasound scanner; the method is based on image reconstruction from the data
- User requirements: Matlab environment, speeding up algorithm development, resource gathering
- Using GridSolve: the int.eu.grid middleware is used to send GridSolve agents (pilot jobs) to WNs; integrated within the Migrating Desktop (the pilot-job pattern is sketched below)
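The pilot-job pattern behind this setup, in miniature: an agent job lands on a worker node and then pulls tasks from the user's session instead of going back through the broker for each one. The in-process queue below merely stands in for the GridSolve agent/server link, which uses its own protocol.

```python
# Sketch: pull-based pilot agent draining a task queue.
import queue

task_queue = queue.Queue()      # stands in for the GridSolve agent/server link
for block in range(8):          # e.g. blocks of an image-reconstruction step
    task_queue.put(block)

def pilot_agent():
    """Runs on the WN: keeps pulling work while the session is alive."""
    while not task_queue.empty():
        task = task_queue.get()
        print(f"processing block {task}")  # placeholder for the Matlab call

pilot_agent()
```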
41 Applications porting and development: fast job allocation with glide-ins
- Glidein provides a mechanism to share computing resources on a per-VO basis
- If all the resources of a VO are occupied (no free CPUs), the user can still submit an interactive job and immediately get a CPU shared with a batch job of the same VO
- Analysis of water quality in reservoirs: uses interactivity, visualization
42 Future and sustainability
- Strong points of the int.eu.grid infrastructure from the user's point of view:
  - Reliable support for MPI parallel jobs on the grid
  - Support for interactivity: makes daily work easier and speeds up development work
  - Easy and direct access to grid resources for test purposes
  - Support for pilot jobs, enabling GridRPC via GridSolve
- The application support developed in int.eu.grid is being used in FP7 projects:
  - DORII (Deployment of Remote Instrumentation Infrastructure)
  - EUFORIA (EU Fusion for ITER Applications)
43 The GRID-CSIC project
- Origin: CSIC's experience in GRID projects; an area of possible collaboration with CNRS
- Opportunity (national e-Science initiative, NGI, EGI); joint impulse from VORI-VICYT
- Objective: to set up an advanced distributed computing infrastructure enabling research projects that require capabilities beyond the reach of a single user or research group. In particular, it is expected to boost multidisciplinary or multi-centre projects in which researchers need to simulate, analyse, process, distribute or access large volumes of data.
- Examples (e-Science):
  - Particle physics experiments (CDF, CMS, ATLAS, ILC...)
  - Phenomenology (SUSY models) and Lattice
  - Space missions (XMM, Planck...)
  - Astronomical observations
  - Climate change modelling
  - Computational chemistry
  - Biocomputing
44 Project foundations
- The project is based on Grid technology, which allows geographically distributed resources to be shared and accessed transparently.
- In particular, it proposes to use middleware that enables interoperability with European Grid infrastructures, such as those of the EGEE project and of the i2g project (the latter coordinated by CSIC).
- The infrastructure can be shared with the IberGrid initiative under development with Portugal, and with the infrastructure of the CNRS Institut des Grilles in France, with which a collaboration agreement will be established.
- The project involves deploying an estimated total computing capacity of about 8,000 processors and an online storage capacity of 1,000 Terabytes (1 Petabyte).
- This infrastructure will be rolled out in three phases over three years (2008, 2009, 2010):
  - In the first year, the pilot phase will include three centres that already have experience in this kind of project (IFCA, IFIC and IAA)
  - The second, extension phase will include centres in Madrid and Catalonia
  - Finally, the consolidation phase will complete the coverage map at the national level
45 Structure
- The project has three work areas:
  - Infrastructure: installation and operation of the computing equipment, and its integration into the Grid environment
  - Applications and development: support for adapting the applications and their specific software
  - Project coordination: management, internal organization and dissemination
- Initial team (centre, staff and experience in the area):
  - IFCA (Instituto de Física de Cantabria): staff Jesús Marco, Isabel Campos, Rafael Marco, Celso Martínez Rivero (Distributed Computing (GRID) research line); contracted staff Iban Cabrillo, Pablo Orviz, Álvaro López, Irma Díaz
  - IFIC (Instituto de Física Corpuscular): staff José Salt, Santiago González, Javier Sánchez (GRID research line); contracted staff Gabriel Amorós
  - IAA (Instituto de Astrofísica de Andalucía): staff José Ruedas, Wilfredo More; contracted staff:
46 Activities
Planned activities:

Area: INFRASTRUCTURE
- Installation and Grid integration of each centre's infrastructure: data centre (CPD); installation of the local computing services, including the queue system (PBS, SGE, MOAB), the shared storage system (GPFS, Lustre), archiving, etc.; installation of the Grid services on dedicated servers (CE, SE, BDII, etc.)
- Installation and operation of the global services: global infrastructure services (Resource Broker, monitoring and accounting); global user-support services (VOMS, HelpDesk, web/wiki/repositories, etc.)

Area: APPLICATIONS AND DEVELOPMENT
- Application integration: analysis of the applications and their requirements; improvements in the cluster environment in data access and parallelization; adaptation to a GRID environment
- Software adaptation and development: study of the existing packages; development of specific solutions to improve infrastructure support or application integration

Area: COORDINATION
- Project management: technical, scientific and resource management; report preparation; coordination with other initiatives in the area
- Dissemination: organization of presentation seminars and introductory courses; production of dissemination material (web, leaflets, etc.)
47 Planning
Milestone/Result | Description | Centres | Date
INFRA.1 | Detailed infrastructure plan (extra data-centre conditioning and tender preparation) | IFCA, IFIC, IAA | M1 (Apr 08)
APDES.1 | List of pilot applications (to be integrated in the first year) and their requirements | IFCA, IFIC, IAA | M1 (Apr 08)
COORD.0 | Quarterly reports | IFCA | M3 (Jun 08) ... M36
COORD.1 | Dissemination plan (calendar, actions) | IFCA, IFIC, IAA | M3
INFRA.2 | First-year local resources in operation (1300x3 processors + 168x3 TB) | IFCA, IFIC, IAA | M6 (Sept 08)
INFRA.3 | Integration into global GRID services (IFCA and IFIC) | IFCA, IFIC | M7 (Oct 08)
INFRA.4 | Global GRID services in operation (IFCA and IFIC), integration (IAA); network connection | IFCA, IFIC, IAA | M9 (Dec 08)
APDES.2 | Pilot applications running at the three centres | IFCA, IFIC, IAA | M9 (Dec 08)
APDES.3 | Pilot application support software | IFCA, IFIC, IAA | M9 (Dec 08)
COOR.2 | First-year report; DECISION: project extension | IFCA | M12 (Mar 09)
48 Planning II
Milestone/Result | Description | Centres | Date
INFRA.5 | Detailed plan for the extension to three more centres | IFCA, CTI, ... | M13 (Apr 09)
APDES.4 | List of new pilot applications (to be integrated in the second year) and their requirements | IFCA, CTI... | M13 (Apr 09)
APDES.5 | List of new advanced applications (to be integrated in the second year) and their requirements | IFCA, IFIC, IAA | M13 (Apr 09)
INFRA.6 | Second-year local resources in operation (650x3 processors + 84x3 TB) | IFCA, CTI... | M18 (Sept 09)
INFRA.7 | Integration of the new resources into global GRID services | IFCA, CTI... | M19 (Oct 09)
APDES.6 | Pilot applications running at the three new centres; advanced applications running | IFCA, CTI... | M21 (Dec 09)
APDES.7 | Software adaptations and development for the advanced applications | IFCA, IFIC | M21 (Dec 09)
COOR.3 | Second-year report; DECISION: project extension | IFCA | M24 (Mar 10)
49 Current status
- First-year equipment purchased:
  - IFCA: computing: 182 IBM blades (dual quad-core: 1456 cores), 70 + 14 with Infiniband, 3 x 10G network connections; storage: SATA disk arrays (~175 Terabytes), 4 GPFS servers
  - IFIC: computing: HP + DELL; storage: SUN
  - IAA: computing: IBM x3850 M2 servers, with 4th-generation X-Architecture technology, scalable from 4 to 16 processors (Intel Quad-Core Xeon X7350) and up to 1 TB of RAM in the 16-processor configuration; storage: DELL
- Installation must be finished in September
- Staff contracts (1 graduate + 1 PhD) under way for September
- Contact with CNRS established
- Next edition of the "Grid and e-Science" course in Santander (within the UIMP framework)
50 Interoperability
- The GRID-CSIC infrastructure will remain interoperable with other existing Grid computing infrastructures, in particular those of the European projects Interactive European Grid (i2g), EGEE-II, DORII and EUFORIA, and those of the national Tier-2 projects of the ATLAS and CMS collaborations
- Also with the new national Grid initiative within the Spanish e-Science Network, in which CSIC plays a relevant role (coordinating the Grid infrastructure)
- This initiative is also expected to allow CSIC to participate directly in the future European Grid Infrastructure (EGI)
51 IBERIAN INFRASTRUCTURE: COMMON PLAN FOR DISTRIBUTED COMPUTATION (R. Gavela, IBERGRID Conference)
COLLABORATION BETWEEN SPAIN AND PORTUGAL
- EGEE: 14 CENTRES IN THE SOUTHWEST FEDERATION; COLLABORATION IN SA1 (INFRASTRUCTURE), SA3 (MIDDLEWARE), NA2, NA3 (DISSEMINATION AND TRAINING) AND NA4 (APPLICATIONS)
- WLCG: TIER-1 (PIC), 3 TIER-2 IN SPAIN AND 1 TIER-2 IN PORTUGAL
- EELA, CoreGRID, TORGA.NET, CYTED GRID, CROSSGRID, INTERACTIVE EUROPEAN GRID
- COMMUNICATIONS: CONNECTIONS IN GALICIA AND EXTREMADURA; FRONTIERS: SANTIAGO, TRUJILLO
52 IBERIAN INFRASTRUCTURE: COMMON PLAN FOR DISTRIBUTED COMPUTATION (R. Gavela, IBERGRID Conference)
GENERAL IDEAS FOR GRID COLLABORATION
- WIDE, OPEN AND EFFECTIVE COLLABORATION; STRATEGIC ALLIANCE IN THE EU
- COMMON INFRASTRUCTURE, FIRSTLY BASED ON THE EGEE AND EELA STANDARD
- POWERFUL COMMON COMMUNICATIONS NETWORK: REDIRIS-RCTS COORDINATION; POSSIBLE SPECIFIC GRID NETWORK
- ORGANIZED STRUCTURE OF RESOURCES: USER CERTIFICATION, RESOURCE CENTRES, SUPPORT, SECURITY, MONITORING AND CONTROL
- APPLICATIONS PUSH: COMMON VIRTUAL ORGANIZATIONS; SELECT APPROPRIATE COMMON APPLICATIONS
- INFORMATION AND TRAINING: TAKE ADVANTAGE OF COMMON INITIATIVES; RESEARCHERS' MOBILITY
- COORDINATE NATIONAL PLANS AND PROMOTE BILATERAL COOPERATION
53 Next steps...
- GRID-CSIC
- LHC Computing Grid: Tier-2, Tier-3
- DORII, EUFORIA
- e-Science Thematic Network in Spain: Grid Infrastructure group (infrastructure + middleware + applications)
- Contribution to EGI: participation in the Policy Board
- For anything you need: marco [at] ifca.unican.es
- Come to Santander and visit us at IFCA! A great place for surfing!
54 Advice (easier to give than to follow)
- The key to a project is having a clear, realistic objective and wanting to achieve it
  - Have a good idea
  - Assess the resources really needed
  - Put together a good, complementary team
- The key to e-Science is collaboration
  - REAL collaboration is what makes the famous "synergy" possible
  - REAL collaboration also makes "multidisciplinarity" very easy
- The key problems of collaboration are the usual ones:
  - Competition, well or badly understood, for resources (financial, infrastructure, human)
  - Inadequate internal "management" structures: contributions are not recognized; external or internal impositions without scientific or technological weight; closed structures that do not allow evolution
- When collaboration works, the experience is fantastic
  - My experience at CERN
  - My experience in several Grid projects