Filter by:
Publication type
- Event (4582)
- Article (830)
- Master's thesis (475)
- Doctoral thesis (318)
- Dataset (250)
Authors
- Servicio Sismológico Nacional (IGEF-UNAM) (4582)
- Fernando Nuno Dias Marques Simoes (250)
- Waldo Ojeda Bustamante (40)
- Amor Mildred Escalante (32)
- Iván Galicia Isasmendi (32)
Publication years
Publishers
- UNAM, IGEF, SSN, Grupo de Trabajo (4582)
- Cenoteando, Facultad de Ciencias, UNAM (cenoteando.mx) (249)
- Instituto Mexicano de Tecnología del Agua (207)
- Instituto Tecnológico y de Estudios Superiores de Monterrey (104)
- Universidad Autónoma de San Luis Potosí (85)
Source repositories
- Repositorio de datos del Servicio Sismológico Nacional (4582)
- Repositorio institucional del IMTA (571)
- Cenotes de Yucatan (250)
- Colecciones Digitales COLMEX (199)
- Repositorio Institucional NINIVE (186)
Access types
- oa:openAccess (6913)
- oa:embargoedAccess (9)
- oa:Computación y Sistemas (1)
Languages
Subjects
- Sismología (13746)
- CIENCIAS FÍSICO MATEMÁTICAS Y CIENCIAS DE LA TIERRA (5150)
- CIENCIAS DE LA TIERRA Y DEL ESPACIO (4631)
- GEOFÍSICA (4585)
- SISMOLOGÍA Y PROSPECCIÓN SÍSMICA (4584)
Francisco Pinto Matthew Paul Reynolds Robert Furbank (2024, [Article])
Deep Learning Object-Based Image Analysis Optical Imagery CIENCIAS AGROPECUARIAS Y BIOTECNOLOGÍA AGRICULTURE IMAGE ANALYSIS PLANT BREEDING REMOTE SENSING MACHINE LEARNING
Xu Wang Sandesh Kumar Shrestha Philomin Juliana Suchismita Mondal Francisco Pinto Govindan Velu Leonardo Abdiel Crespo Herrera Julio Huerta Espino Ravi Singh Jesse Poland (2023, [Article])
New Crop Varieties Plant Breeding Programs Yield Prediction CIENCIAS AGROPECUARIAS Y BIOTECNOLOGÍA LEARNING GRAIN YIELDS WHEAT BREEDING FOOD SECURITY
Multimodal deep learning methods enhance genomic prediction of wheat breeding
Carolina Rivera-Amado Francisco Pinto Francisco Javier Pinera-Chavez David González-Diéguez Matthew Paul Reynolds Paulino Pérez-Rodríguez Huihui Li Osval Antonio Montesinos-Lopez Jose Crossa (2023, [Article])
Conventional Methods Genomic Prediction Accuracy Deep Learning Novel Methods CIENCIAS AGROPECUARIAS Y BIOTECNOLOGÍA WHEAT BREEDING MACHINE LEARNING METHODS MARKER-ASSISTED SELECTION
Statistical machine-learning methods for genomic prediction using the SKM library
Osval Antonio Montesinos-Lopez Brandon Alejandro Mosqueda González Jose Crossa (2023, [Article])
Sparse Kernel Methods R package Statistical Machine Learning Genomic Selection CIENCIAS AGROPECUARIAS Y BIOTECNOLOGÍA MARKER-ASSISTED SELECTION MACHINE LEARNING GENOMICS METHODS
Multi-environment genomic prediction of plant traits using deep learners with dense architecture
Osval Antonio Montesinos-Lopez Jose Crossa (2018, [Article])
Shared Data Resources Deep Learning Genomic Prediction CIENCIAS AGROPECUARIAS Y BIOTECNOLOGÍA ACCURACY GENOMICS NEURAL NETWORKS FORECASTING DATA MARKER-ASSISTED SELECTION
Multi-trait, multi-environment deep learning modeling for genomic-enabled prediction of plant traits
Osval Antonio Montesinos-Lopez Jose Crossa Francisco Javier Martin Vallejo (2018, [Article])
Deep Learning Genomic Prediction Bayesian Modeling Shared Data Resources CIENCIAS AGROPECUARIAS Y BIOTECNOLOGÍA BAYESIAN THEORY RESOURCES DATA BREEDING PROGRAMMES
A line follower robot implementation using Lego's Mindstorms Kit and Q-Learning
Victor Ricardo Cruz Alvarez Enrique Hidalgo Peña Hector Gabriel Acosta Mesa (2012, [Article])
A common problem when working with mobile robots is that the programming phase can be a long, costly, and difficult process for programmers. Reinforcement learning algorithms offer one of the most general frameworks in machine learning. This work presents an approach that uses the Q-learning algorithm on a Lego robot so that it learns "by itself" to follow a black line drawn on a white surface. The programming environment used in this work is Matlab.
INGENIERÍA Y TECNOLOGÍA Reinforcement learning algorithms Q-Learning (Reinforcement learning algorithm) Lego Mindstorms (Robotics) Matlab
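The Q-learning update the abstract describes can be sketched with a tabular agent on a toy line-follower simulation. This is an illustrative stand-in, not the authors' Lego/Matlab implementation: the five-state environment, reward values, and hyperparameters are all assumptions.

```python
import random

random.seed(0)  # reproducible training run

# Tabular Q-learning on a toy line-follower. The state encodes the
# robot's offset from the line; steering toward the line re-centers it.
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
STATES = ["far_left", "left", "center", "right", "far_right"]
ACTIONS = ["steer_left", "forward", "steer_right"]

def step(state, action):
    """Toy dynamics standing in for the real sensor/motor loop."""
    idx = STATES.index(state)
    if action == "steer_left":
        idx = max(0, idx - 1)                 # move left on the scale
    elif action == "steer_right":
        idx = min(len(STATES) - 1, idx + 1)   # move right on the scale
    next_state = STATES[idx]
    reward = 1.0 if next_state == "center" else -0.1
    return next_state, reward

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def greedy(state):
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for _episode in range(500):
    state = random.choice(STATES)
    for _ in range(20):
        # epsilon-greedy: mostly exploit the current estimate, sometimes explore
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        next_state, reward = step(state, action)
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

print(greedy("left"), greedy("center"), greedy("right"))
```

After training, the greedy policy steers back toward the line from either side and drives forward when centered, which is the behavior the paper obtains on the physical robot.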
Distance learning for farmers: Experience during the pandemic
Andrea Gardeazabal (2023, [Working paper])
In response to the COVID-19 pandemic's disruption of farmer training—a crucial component for enhancing the resilience and livelihoods of smallholder farmers—CIMMYT innovated educational solutions to sustain capacity building in agri-food systems. Addressing the challenges of limited mobile device access, poor internet connectivity, and digital illiteracy, CIMMYT implemented two pilot projects in Mexico. These projects facilitated distance learning for adult farmers in rural areas, employing both internet-based and non-internet methods. The non-internet approach utilized traditional media like print, while the internet-based approach leveraged WhatsApp for educational content delivery. Building on these experiences, CIMMYT expanded its offerings by creating micro-courses delivered through WhatsApp, hosted on the Co-LAB's new Learning Network platform, specifically targeting farmers. This paper delves into the various strategies, methods, and techniques adopted, documenting the learning outcomes, results, and key conclusions drawn from these innovative training initiatives.
Distance Learning Digital Inclusion Innovative Training CIENCIAS AGROPECUARIAS Y BIOTECNOLOGÍA DISTANCE EDUCATION CAPACITY DEVELOPMENT METHODS COMMUNICATION TECHNOLOGY
Systems control using machine learning
Jesús Martín Miguel Martínez (2023, [Master's thesis])
Reinforcement learning is a machine learning paradigm with extensive development and growing demand in decision-making and control applications. It allows the design of controllers that do not depend directly on the model describing the system dynamics, which matters because accurate models are often unavailable in real-world applications. The objective of this thesis is to implement a model-free discrete-time optimal controller. The chosen methodology is based on reinforcement learning algorithms aimed at systems with continuous state and action spaces through discrete models. The value-function concept (Q-function and V-function) and the Bellman equation are used to solve the linear quadratic regulator problem for a mass-spring-damper mechanical system, both when the model is partially known and when it is entirely unknown. In both cases the value functions are defined explicitly by the structure of a parametric approximator whose weight vector is tuned through an iterative parameter-estimation process. With partial knowledge of the dynamics, temporal-difference learning is used in episodic training, tuning the critic by least squares with recursive least squares and the actor by gradient descent; the best result for this scheme is obtained with the value iteration algorithm for solving the Bellman equation, with significant accuracy relative to the optimal values (the DLQR solution). When the dynamics are entirely unknown, the Q-learning algorithm is used in continuous training, under both a least-squares scheme with recursive least squares and a least-squares scheme with gradient descent. Both schemes use the policy iteration algorithm to solve the Bellman equation, and the mean squared error between the obtained and the theoretical optimal values is approximately 0.001. An adaptability test considering variations that may occur in the plant parameters shows that the least-squares scheme with recursive least squares performs best, significantly reducing the number of iterations required for convergence to the optimal values.
reinforcement learning, optimal control, adaptive control, mechanical systems, model-free, completely unknown dynamics, parametric approximation, Q-learning, policy iteration INGENIERÍA Y TECNOLOGÍA CIENCIAS TECNOLÓGICAS TECNOLOGÍA DE LOS ORDENADORES INTELIGENCIA ARTIFICIAL
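The model-based baseline the thesis compares against (the DLQR solution) can be sketched by iterating the Riccati difference equation for a discretized mass-spring-damper. The physical parameters, discretization, and cost weights below are illustrative assumptions, not the thesis values.

```python
import numpy as np

# Discrete-time LQR for a mass-spring-damper, via value iteration on the
# Riccati difference equation. Assumed parameters: m=1 kg, c=0.5 N·s/m,
# k=2 N/m, sample time dt=0.01 s; Q and R are illustrative cost weights.
m, c, k, dt = 1.0, 0.5, 2.0, 0.01

# Euler discretization of x'' = (u - c*x' - k*x)/m with state [pos, vel]
A = np.array([[1.0, dt],
              [-k / m * dt, 1.0 - c / m * dt]])
B = np.array([[0.0], [dt / m]])
Qc = np.eye(2)          # state cost
R = np.array([[0.1]])   # input cost

P = np.eye(2)
for _ in range(5000):   # Riccati value iteration to a fixed point
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Qc + A.T @ P @ (A - B @ K)

K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
# u = -K x stabilizes the plant: closed-loop eigenvalues lie in the unit circle
eigs = np.linalg.eigvals(A - B @ K)
print("gain:", K, "closed-loop |eig|:", np.abs(eigs))
```

A learned controller such as the thesis's Q-learning scheme would be judged by how closely its gain and cost matrix approach this K and P without ever using A and B.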
Martin van Ittersum (2023, [Article])
Context: Collection and analysis of large volumes of on-farm production data are widely seen as key to understanding yield variability among farmers and improving resource-use efficiency.
Objective: The aim of this study was to assess the performance of statistical and machine learning methods to explain and predict crop yield across thousands of farmers' fields in contrasting farming systems worldwide.
Methods: A large database of 10,940 field-year combinations from three countries in different stages of agricultural intensification was analyzed. Random effects models were used to partition crop yield variability, and random forest models were used to explain and predict crop yield within a cross-validation scheme with data re-sampling over space and time.
Results: Yield variability in relative terms was smallest for wheat and barley in the Netherlands and for wheat in Ethiopia, intermediate for rice in the Philippines, and greatest for maize in Ethiopia. Random forest models comprising a total of 87 variables explained a maximum of 65% of cereal yield variability in the Netherlands and less than 45% of cereal yield variability in Ethiopia and the Philippines. Crop management variables were important to explain and predict cereal yields in Ethiopia, while predictive (i.e., known before the growing season) climatic variables and explanatory (i.e., known during or after the growing season) climatic variables were most important to explain and predict cereal yield variability in the Philippines and the Netherlands, respectively. Finally, model cross-validation for regions or years not seen during model training reduced the R2 considerably for most crop × country combinations, while for wheat in the Netherlands this was model dependent.
Conclusion: Big data from farmers' fields is useful to explain on-farm yield variability to some extent, but not to predict it across time and space.
Significance: The results call for moderate expectations towards big data and machine learning in agronomic studies, particularly for smallholder farms in the tropics, where model performance was poorest regardless of the variables considered and the cross-validation scheme used.
Model Accuracy Model Precision Linear Mixed Models CIENCIAS AGROPECUARIAS Y BIOTECNOLOGÍA MACHINE LEARNING SUSTAINABLE INTENSIFICATION BIG DATA YIELDS MODELS AGRONOMY
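The leave-region-out cross-validation at the heart of this study can be sketched with synthetic data: a random forest trained on some regions and scored on a held-out region cannot recover the unseen region effect, so out-of-region R2 degrades. The data-generating process, covariates, and group counts below are assumptions for illustration, not the paper's database.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import GroupKFold

# Synthetic "on-farm" data: yield depends on two observed covariates plus
# an unobserved region effect (all values are illustrative assumptions).
rng = np.random.default_rng(0)
n = 600
region = rng.integers(0, 6, n)                 # 6 regions of fields
rain = rng.normal(600, 100, n)                 # climatic covariate
fert = rng.normal(100, 20, n)                  # management covariate
region_effect = np.array([0.0, 0.5, -0.5, 1.0, -1.0, 0.3])[region]
y = 3 + 0.004 * rain + 0.01 * fert + region_effect + rng.normal(0, 0.3, n)

X = np.column_stack([rain, fert])
scores = []
# GroupKFold holds out one whole region per fold, so the model is always
# evaluated on a region it never saw during training.
for train, test in GroupKFold(n_splits=6).split(X, y, groups=region):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[train], y[train])
    scores.append(r2_score(y[test], model.predict(X[test])))

print("leave-region-out R2 per fold:", np.round(scores, 2))
```

With this toy setup the per-fold scores are typically far below what a random train/test split would give, and can even be negative, mirroring the paper's finding that models generalize poorly to regions or years not seen during training.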