Only 13% of data projects make it into production. Data is not enough; you need:

1. A pipeline that connects to your data sources.
2. To train your models.
3. A UI fast enough to render results and chart millions of records.
4. A way to deploy your product so your customers can use it.

Most people try to build this from scratch. Instead, I use @Taipy_io:

- You can create pipelines connected to your data sources (Databricks, Snowflake).
- You can transform your data.
- You can train and test your models.
- You can connect your pipeline results to the frontend.
- Taipy is cloud agnostic. You can deploy on any cloud platform, regardless of where the data resides and how it is consumed.
- You only need to know one tech stack: Python.
- Taipy is open source; you can contribute if you need to.

I’m obsessed with putting things in front of clients as soon as possible. I use Taipy because I can spin up a prototype with a few lines of code and put a product in front of my users.

Take a look at the examples in their repo and give a star to this cool library: github.com/Avaiga/taipy

Big thanks to Taipy for collaborating with me on this post.
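To give a feel for the "few lines of code" claim, here is a minimal sketch of a Taipy GUI prototype. The page text, the `data` dict, and the variable names are all illustrative assumptions, not code from the post; it assumes `pip install taipy`, and the import is kept inside the `__main__` guard so the page definition itself reads as plain Python.

```python
# Hypothetical minimal dashboard sketch (assumes `pip install taipy`).
# Taipy GUI pages are written in a Markdown-like syntax where <|...|>
# denotes a visual element bound to the Python variables below.
page = """
# Sales prototype
Rows loaded: <|{n_rows}|>
<|{data}|chart|x=day|y=sales|>
"""

# Toy stand-in for real pipeline output (e.g. from Databricks/Snowflake).
data = {"day": [1, 2, 3], "sales": [10, 30, 20]}
n_rows = len(data["day"])

if __name__ == "__main__":
    from taipy.gui import Gui  # imported here so the sketch parses without taipy installed
    Gui(page).run()  # serves the page locally in a browser
```

Running the script starts a local web server and renders the chart; swapping the toy dict for a pipeline's output DataFrame is the usual next step.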
@RaulJuncoV How is it different from a pipeline orchestration tool like Airflow?
@RaulJuncoV Sounds like a promising tool we could leverage! Thanks for sharing, Raul!
@RaulJuncoV looks like @Taipy_io effectively handles data to deployment.
@RaulJuncoV I tried this after you first tweeted about it, it's nice to use!!!