Personalized Treatment with Vitamin K Antagonists

Published by Medicacom on 28 May 2021

Acenocoumarol is an anticoagulant drug used to prevent thromboembolic disease
in patients with infarction or transient ischemic attack,
as well as in the management of deep vein thrombosis and myocardial infarction.

Acenocoumarol is a coumarin derivative: it inhibits the reduction of vitamin K by vitamin K reductase.
This prevents the carboxylation of the vitamin K-dependent coagulation factors II, VII, IX and X, and
thereby interferes with coagulation. Hematocrit, hemoglobin, the international normalized ratio (INR) and the hepatic panel should be monitored during treatment.

Our objective is to establish a pharmacogenetic algorithm that predicts an adequate acenocoumarol (AC) dose to stabilize anticoagulation,
based on inter-individual variability in the response to AC, including both clinical and genetic factors.
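To give a sense of what such an algorithm looks like, here is a toy pharmacogenetic dose formula following the pattern commonly used for coumarin anticoagulants, where clinical covariates (age, weight) are combined with genotype indicators such as VKORC1 and CYP2C9 variant allele counts. Every coefficient below is a hypothetical placeholder, not a result of this study:

```python
# Toy sketch of a pharmacogenetic dose algorithm (NOT the study's model).
# All coefficient values are hypothetical placeholders for illustration.

def predict_weekly_dose_mg(age_years, weight_kg,
                           vkorc1_variant_alleles, cyp2c9_variant_alleles):
    """Hypothetical linear dose model: a baseline weekly dose adjusted
    downward for age and variant alleles, upward for body weight."""
    dose = 20.0                            # hypothetical baseline weekly dose (mg)
    dose -= 0.1 * age_years                # older patients tend to need less
    dose += 0.05 * weight_kg               # heavier patients tend to need more
    dose -= 3.0 * vkorc1_variant_alleles   # VKORC1 variants increase sensitivity
    dose -= 2.0 * cyp2c9_variant_alleles   # CYP2C9 variants slow metabolism
    return max(dose, 1.0)                  # clamp to a minimal positive dose

print(predict_weekly_dose_mg(65, 70, 1, 0))
```

A real algorithm of this kind is fitted to cohort data rather than written by hand, which is exactly what the modeling pipeline below does.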

This application was developed by Medicacom in partnership with the biochemistry service of Sahloul Hospital, Sousse.
The latter has already worked on this issue and published a study, which can be consulted by following this link:

We used a variety of tools and technologies to obtain the best possible results.

We implemented the model in Python, using machine learning and deep learning libraries and algorithms.

The process was as follows:

* Exploratory data analysis to uncover patterns that may be useful in the modeling phase.
* Data preprocessing, with imputation of missing values using scikit-learn's iterative imputer driven by an AdaBoostRegressor estimator, plus the application of certain transformations.
* Variable selection with the Lasso regularized regression algorithm.
* Modeling with linear regression and its regularized variants (Lasso, Ridge, ElasticNet, SGDRegressor), alongside XGBRegressor, RandomForestRegressor and a multi-layer perceptron.

* In addition, a deep artificial neural network built with Keras and TensorFlow.
* Hyperparameter tuning and optimization via grid search with cross-validation (GridSearchCV).
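The imputation step above can be sketched as follows, assuming scikit-learn's experimental IterativeImputer with an AdaBoostRegressor as the per-column estimator. The data here is synthetic, not the study's cohort:

```python
# Sketch of iterative imputation with an AdaBoostRegressor estimator.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import AdaBoostRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
X[rng.random(X.shape) < 0.1] = np.nan   # punch ~10% missing values

imputer = IterativeImputer(estimator=AdaBoostRegressor(random_state=0),
                           max_iter=10, random_state=0)
X_filled = imputer.fit_transform(X)     # each column imputed from the others
print(np.isnan(X_filled).sum())         # no missing values remain
```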
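The Lasso selection step works because L1 regularization drives the coefficients of uninformative variables toward zero; the surviving variables are kept. A minimal sketch on synthetic data, where only the first two features carry signal:

```python
# Sketch of Lasso-based variable selection (synthetic data, illustrative only).
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
# Target depends only on the first two features; the rest are noise.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

lasso = LassoCV(cv=5, random_state=1).fit(X, y)       # alpha chosen by CV
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-3)  # non-zero coefficients
print(selected)  # the informative features should be among those selected
```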
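For the deep network, a minimal Keras Sequential sketch is shown below on synthetic data; the layer sizes and training settings are illustrative assumptions, not the study's architecture:

```python
# Sketch of a small dense regression network with Keras (illustrative sizes).
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(3)
X = rng.normal(size=(64, 5)).astype("float32")
y = X.sum(axis=1, keepdims=True)        # synthetic regression target

model = keras.Sequential([
    keras.Input(shape=(5,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1),              # single output: the predicted dose
])
model.compile(optimizer="adam", loss="mae")
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
pred = model.predict(X, verbose=0)      # one prediction per sample
```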
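Finally, the hyperparameter search can be sketched with scikit-learn's GridSearchCV; here a Ridge model on synthetic data stands in for the study's candidate models, and the alpha grid is an arbitrary example:

```python
# Sketch of grid search with cross-validation over a Ridge regression.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -1.0, 0.5]) + rng.normal(scale=0.2, size=100)

grid = GridSearchCV(Ridge(),
                    param_grid={"alpha": [0.01, 0.1, 1.0, 10.0]},
                    cv=5, scoring="neg_mean_absolute_error")
grid.fit(X, y)                 # fits every alpha on every CV fold
print(grid.best_params_)       # the alpha with the best mean CV score
```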

Note that machine learning and deep learning models require a large volume of data to produce robust, accurate results.
Given the limited data available so far, the project will be revisited and optimized in the future to deliver better results as more data is collected.