63 results, page 7 of 7

Genetic analysis of Vibrio parahaemolyticus O3:K6 strains that have been isolated in Mexico since 1998

CARLOS ABRAHAM GUERRERO RUIZ (2017, [Article])

Vibrio parahaemolyticus is an important human pathogen that has been isolated worldwide from clinical cases, most of which have been associated with seafood consumption. Environmental and clinical toxigenic strains of V. parahaemolyticus isolated in Mexico from 1998 to 2012, including those from the only outbreak reported in this country, were characterized genetically to assess the presence of the O3:K6 pandemic clone and their genetic relationship to strains of the pandemic clonal complex (CC3). Pathogenic tdh+ and tdh+/trh+ strains were analyzed by pulsed-field gel electrophoresis (PFGE) and multilocus sequence typing (MLST), and the entire genome of a Mexican O3:K6 strain was sequenced. Most of the strains were tdh/ORF8-positive and corresponded to the O3:K6 serotype. By PFGE and MLST, the ORF8/O3:K6 strains showed a very close genetic relationship to one another and high genetic divergence from non-pandemic strains. Based on the PubMLST database, the O3:K6 strains isolated in Mexico are also closely related to the sequences available for strains in the CC3. The whole-genome sequence of the CICESE-170 strain was highly similar to that of the reference strain RIMD 2210633 and harbored 7 pathogenicity islands, including the 4 that denote O3:K6 pandemic strains. These results indicate that the pandemic strains isolated in Mexico are closely related to each other and to those isolated worldwide. © 2017 Guerrero et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Article, bacterial strain, biofouling, controlled study, Crassostrea, food intake, gene sequence, genetic analysis, genetic variability, Japan, Mexican, Mexico, molecular phylogeny, nonhuman, pandemic, pathogenicity island, sea food, serotyping, toxi. BIOLOGÍA Y QUÍMICA; CIENCIAS DE LA VIDA; GENÉTICA

Control de sistemas usando aprendizaje de máquina

Systems control using machine learning

Jesús Martín Miguel Martínez (2023, [Master's thesis])

Reinforcement learning is a machine learning paradigm with extensive development and growing demand in applications that involve decision-making and control. It is a paradigm that allows the design of controllers that do not depend directly on the model describing the system dynamics. This is important because, in real applications, such models are frequently not available with precision. The objective of this thesis is to implement a model-free discrete-time optimal controller. The chosen methodology is based on reinforcement learning algorithms, focused on systems with continuous state and action spaces handled through discrete models. The concept of a value function (the Q-function and the V-function) and the Bellman equation are used to solve the linear quadratic regulator problem for a mass-spring-damper mechanical system, in cases of partial knowledge and total ignorance of the model. In both cases the value functions are defined explicitly by the structure of a parametric approximator, whose weight vector is tuned through an iterative parameter-estimation process. When partial knowledge of the dynamics is available, temporal-difference learning is used in episodic training, combining a least-squares scheme with recursive least squares to tune the critic and gradient descent to tune the actor; the best result for this scheme is obtained using the value iteration algorithm to solve the Bellman equation, with a significant result in terms of accuracy compared to the optimal values (DLQR function). When the dynamics are unknown, the Q-learning algorithm is used in continuous training, with a least-squares scheme using recursive least squares and a least-squares scheme using gradient descent.
Both schemes use the policy iteration algorithm to solve the Bellman equation, and results of approximately 0.001 are obtained for the mean squared error. An adaptability test is performed considering variations that may occur in the plant parameters; the least-squares scheme with recursive least squares gives the best results, significantly reducing the number of iterations required for convergence to the optimal values.
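The value-iteration solution of the Bellman equation for the DLQR problem described in the abstract can be sketched as a Riccati recursion. The mass-spring-damper parameters, sampling time, and cost weights below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

# Euler-discretized mass-spring-damper: x = [position, velocity], u = force.
# Mass m, stiffness k, damping c and sampling time dt are assumed values.
m, k, c, dt = 1.0, 1.0, 0.5, 0.05
A = np.array([[1.0, dt],
              [-k / m * dt, 1.0 - c / m * dt]])
B = np.array([[0.0],
              [dt / m]])
Qc = np.eye(2)          # state cost weight
Rc = np.array([[1.0]])  # control cost weight

# Value iteration: starting from P = 0, repeatedly apply the Bellman
# backup (the discrete-time Riccati recursion) until P converges.
P = np.zeros((2, 2))
for _ in range(5000):
    K = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)  # greedy gain for current P
    P_next = Qc + A.T @ P @ (A - B @ K)                 # Bellman backup
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next

K = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)      # optimal DLQR gain
print("P =", P)
print("K =", K)
```

With the converged P, the optimal control is u = -Kx; the model-free schemes in the thesis aim to recover this gain without using A and B in the learning update.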

Reinforcement learning is a machine learning paradigm with extensive development and growing demand in decision-making and control applications. This technique allows the design of controllers that do not directly depend on the model describing the system dynamics. It is useful in real-world applications, where accurate models are often unavailable. The objective of this work is to implement a model-free discrete-time optimal controller. Through discrete models, we implemented reinforcement learning algorithms focused on systems with continuous state and action spaces. The concepts of the value function, Q-function, V-function, and the Bellman equation are employed to solve the linear quadratic regulator problem for a mass-spring-damper system with a partially known and a completely unknown model. For both cases, the value functions are explicitly defined by a parametric approximator's structure, where the weight vector is tuned through an iterative parameter estimation process. When partial knowledge of the dynamics is available, the temporal difference learning method is used under episodic training, utilizing the least squares with a recursive least squares scheme for tuning the critic and gradient descent for tuning the actor. The best result for this scheme is achieved using the value iteration algorithm for solving the Bellman equation, yielding significant improvements in approximating the optimal values (DLQR function). When the dynamics are entirely unknown, the Q-learning algorithm is employed in continuous training, using the least squares with recursive least squares and the gradient descent schemes. Both schemes use the policy iteration algorithm to solve the Bellman equation, and the system's response using the obtained values was compared to the one using the theoretical optimal values, yielding approximately zero mean squared error between them.
An adaptability test is conducted considering variations that may occur in plant parameters, with the least squares with recursive least squares scheme yielding the best results, significantly reducing the number of iterations required for convergence to optimal values.
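A minimal sketch of the model-free Q-learning scheme the abstract describes: the Q-function is a quadratic parametric approximator in (x, u), its weight vector is fitted from transition data through the Bellman equation, and policy iteration improves the gain. For brevity this uses batch least squares in place of the recursive least-squares and gradient-descent tuning reported in the thesis; the plant and exploration settings are assumed values, and the plant model appears only to simulate transitions, never inside the learning update:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated plant (assumed mass-spring-damper discretization); the learner
# only sees (x, u, cost, x_next) samples, never A and B directly.
m, k, c, dt = 1.0, 1.0, 0.5, 0.05
A = np.array([[1.0, dt], [-k / m * dt, 1.0 - c / m * dt]])
B = np.array([[0.0], [dt / m]])
Qc, Rc = np.eye(2), np.array([[1.0]])

def phi(x, u):
    """Quadratic basis for Q(x, u) = z' H z with z = [x; u]."""
    z = np.concatenate([x, u])
    return np.outer(z, z)[np.triu_indices(3)]

def cost(x, u):
    return x @ Qc @ x + u @ Rc @ u

K = np.zeros((1, 2))  # initial policy u = -Kx (K = 0 is stabilizing here)
for _ in range(10):   # policy iteration
    # Policy evaluation: least-squares fit of the Q-function weights w from
    # the Bellman equation  w'phi(x, u) = cost(x, u) + w'phi(x', -Kx').
    Phi, y = [], []
    for _ in range(200):
        x = rng.uniform(-1.0, 1.0, 2)
        u = -K @ x + rng.normal(0.0, 0.1, 1)  # exploration noise
        xn = A @ x + B @ u
        Phi.append(phi(x, u) - phi(xn, -K @ xn))
        y.append(cost(x, u))
    w, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)

    # Rebuild the symmetric H: off-diagonal weights are halved so that
    # z' H z reproduces w'phi(z) term by term.
    H = np.zeros((3, 3))
    H[np.triu_indices(3)] = w
    H = (H + H.T) / 2.0

    # Policy improvement: minimize Q over u  ->  K = H_uu^{-1} H_ux.
    K = np.linalg.solve(H[2:, 2:], H[2:, :2])

print("learned gain K =", K)
```

The learned gain can then be checked against the DLQR gain obtained from the discrete-time Riccati equation, which is how accuracy relative to the theoretical optimum is measured.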

aprendizaje por refuerzo, control óptimo, control adaptativo, sistemas mecánicos, libre de modelo, dinámica totalmente desconocida, aproximación paramétrica, Q-learning, iteración de política; reinforcement learning, optimal control, adaptive control, mechanical systems, model-free, completely unknown dynamics, parametric approximation, Q-learning, policy iteration. INGENIERÍA Y TECNOLOGÍA; CIENCIAS TECNOLÓGICAS; TECNOLOGÍA DE LOS ORDENADORES; INTELIGENCIA ARTIFICIAL

Phylogenetic relationships of Pseudo-nitzschia subpacifica (Bacillariophyceae) from the Mexican Pacific, and its production of domoic acid in culture

Sonia Quijano (2020, [Article])

Pseudo-nitzschia is a cosmopolitan genus, some species of which can produce domoic acid (DA), a neurotoxin responsible for Amnesic Shellfish Poisoning (ASP). In this study, we identified P. subpacifica for the first time in Todos Santos Bay and Manzanillo Bay, in the Mexican Pacific, using SEM and molecular methods. Isolates from Todos Santos Bay were cultivated under conditions of phosphate sufficiency and deficiency at 16°C and 22°C to evaluate the production of DA. This toxin was detected in the particulate (DAp) and dissolved (DAd) fractions during the exponential and stationary growth phases of the cultures. The highest DA concentration was detected during the exponential phase in cells maintained in P-deficient medium at 16°C (1.14 ± 0.08 ng mL-1 DAd and 4.71 ± 1.11 × 10−5 ng cell-1 of DAp). In P-sufficient cultures, DA was higher in cells maintained at 16°C (0.25 ± 0.05 ng mL-1 DAd and 9.41 ± 1.23 × 10−7 ng cell-1 of DAp) than in cells cultured at 22°C. Therefore, we confirm that P. subpacifica can produce DA, especially under P-limited conditions, which could be associated with extraordinary oceanographic events such as the 2013–2016 "Blob" in the northeastern Pacific Ocean. This event altered local oceanographic conditions and possibly promoted the presence of potentially harmful species in economically important areas of the Mexican Pacific coast. © 2020 Quijano-Scheggia et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

domoic acid, kainic acid, Article, cell growth, controlled study, diatom, Mexico, morphology, nonhuman, Pacific Ocean, phylogeny, plant cell, plant growth, Pseudo nitzschia, toxin analysis, cell culture technique, classification. CIENCIAS FÍSICO MATEMÁTICAS Y CIENCIAS DE LA TIERRA; CIENCIAS DE LA TIERRA Y DEL ESPACIO; OCEANOGRAFÍA