Friday, November 6, 2020

Using Automated Computer Vision to See the Big Picture

Data comes in all shapes and sizes. Just as in our daily decision-making, the more points of view we have, the better the potential outcome. For example, when considering a hospital readmission dataset, we can use traditional features, such as patients’ gender, readmission date, and the number of previous lab procedures. To strengthen the artificial intelligence (AI) models and get even better predictions, we can also add an image of a patient’s MRI and raw text from doctors’ notes.

In the webinar DataRobot Visual AI: See the Big Picture with Automated Computer Vision, we explored these issues, discussing how deep learning and computer vision algorithms work and the way companies can apply DataRobot Visual AI for their daily operations.

Visual AI Demystifies Deep Learning

You don’t need to be an expert in data science or deep learning to train neural networks and create models yourself. DataRobot Visual AI is extremely easy to use. Just drag and drop a zip file containing your images, their labels, and any other set of features you like into DataRobot to get started. You then pick your target feature and hit the Start button in the usual way.
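
As a rough illustration, here is a minimal Python sketch of one way such an archive could be assembled, assuming a folder-per-label layout on disk (the images/ directory and label names are hypothetical; check the DataRobot documentation for the exact formats Visual AI accepts):

```python
import zipfile
from pathlib import Path

# Hypothetical local folder: one subdirectory per class label,
# e.g. images/readmitted/scan_001.png and images/not_readmitted/scan_002.png
image_root = Path("images")

with zipfile.ZipFile("visual_ai_dataset.zip", "w", zipfile.ZIP_DEFLATED) as archive:
    for image_path in image_root.rglob("*.png"):
        # Keep the label folder in the archive path so each image
        # remains associated with its class.
        archive.write(image_path, arcname=image_path.relative_to(image_root.parent))
```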

The deep learning models DataRobot builds are just as easy to understand. Visual AI provides detailed blueprints that explain every preprocessing step and algorithm used, along with a variety of human-friendly visual insights to help you interpret results, including image activation maps and row-by-row prediction explanations.

Visual AI is part of DataRobot’s Automated Machine Learning product, which is tightly integrated with DataRobot MLOps. MLOps makes it incredibly easy to deploy your deep learning model to your production environment of choice in just a few clicks. You are then able to monitor and manage your Visual AI models over their full production lifecycle in a tightly governed environment.

Example Visual AI Use Cases

How do companies use automated computer vision? In the webinar, we discuss some examples, including the early detection of forest fires from aerial photography and the assessment of hurricane property damage using images from drones in areas that inspectors on the ground cannot access. Another application discussed was the automated capability to distinguish people who are wearing masks from those who are not before they are admitted into a building or office.

In the healthcare industry, providers can use more complex models with a wide variety of features to recognize types of skin lesions and the probability of skin cancer developing.

Conclusion

The different ways in which DataRobot Visual AI can be used are limited only by the human imagination. If you are interested in learning more about how Visual AI can add value to your business or organization, be sure to watch the whole webinar, DataRobot Visual AI: See the Big Picture with Automated Computer Vision.

On-demand Webinar
DataRobot Visual AI: See the Big Picture with Automated Computer Vision

The post Using Automated Computer Vision to See the Big Picture appeared first on DataRobot.



from Blog – DataRobot https://ift.tt/3euqzJH
via Gabe's Musings

Wednesday, October 21, 2020

Countdown to AI Experience EMEA Virtual Conference

The AI Experience EMEA Virtual Conference is just three weeks away, and Team DataRobot couldn’t be more excited about what our speakers have in store for our virtual audience. Conference attendees will walk away with pragmatic ideas for how to accelerate time to impact and value for AI, and practical strategies to implement those ideas within their organizations — all at no cost to attend. This day-long event is an experience that no AI visionary will want to miss. 

We’re pleased to announce two keynote speakers that conference attendees can look forward to hearing from on November 10th. 

Brian Prestidge: Director of Insights & Decision Technology at Manchester City Football Club


With 15 years working in professional football in various roles supporting elite performance practitioners, Brian has seen the technological developments that have created an exponential growth in data available to football clubs. Whether through the use of AI & simulation for player development or the application of robust data science methods to support coaches in their game preparation, Brian & his team play a key role at City Football Group in enabling them to make better and faster decisions in the very dynamic and heavily scrutinised environment of professional football.

Dr. Hannah Fry: Associate Professor in the Mathematics of Cities, Centre for Advanced Spatial Analysis at University College London 


Dr. Hannah Fry is an Associate Professor in the Mathematics of Cities at the Centre for Advanced Spatial Analysis at UCL where she studies patterns in human behavior. Her research applies to a wide range of social problems and questions, from shopping and transport to urban crime, riots and terrorism.

Her critically acclaimed BBC documentaries include Horizon: Diagnosis on Demand? The Computer Will See You Now, Britain’s Greatest Invention, City in the Sky (BBC Two), Magic Numbers: Hannah Fry’s Mysterious World of Maths, The Joy of Winning, The Joy of Data, Contagion! The BBC Four Pandemic and Calculating Ada (BBC Four). She also co-presents The Curious Cases of Rutherford and Fry (BBC Radio 4) and The Maths of Life with Lauren Laverne (BBC Radio 6).

Hannah is the author of Hello World, published in 2018.


We hope you’ll join us on November 10th to hear these keynotes, and our full lineup of powerhouse speakers, share their insights on impactful, trustworthy AI. Leaders from Bayer Pharmaceuticals, Deutsche Post DHL Group, the Medical Faculty Mannheim of Heidelberg University, and more will help you understand how to leverage AI to address hyper-critical issues impacting your organization.

Virtual Event
AI Experience EMEA Virtual Conference: Accelerating Impact With Trusted AI

The post Countdown to AI Experience EMEA Virtual Conference appeared first on DataRobot.



from Blog – DataRobot https://ift.tt/3kiPGkz
via Gabe's Musings

Monday, October 19, 2020

AI in Financial Markets, Part 5 of 4: Flippantly Answered Questions

This was going to be a four-part blog series. I figured that I’d covered most of what financial markets participants might be interested in hearing about when it comes to AI in the markets (and had probably bored the audience for long enough, all things considered). Then we did a webinar on the topic together with FactSet, had a great turnout, and were asked lots of great questions by our listeners — so many, in fact, that we ran out of time and had to answer them by email. And the questions were sufficiently good that we thought you might be interested in reading the Q&A, too, so here goes.

Financial markets are known to have an extremely non-stable structure. How do you find the right balance within your model between having a model that reacts quickly using mostly recent data (to have a better fit) vs. having a more stable model that can use longer (hence, more) data?

In one word: empirically. Remember that in financial markets, in particular, and time-series modeling in general, more history doesn’t automatically mean that you get better models. If a longer history means that you enter a different behavioral régime, your model performance will deteriorate. So trying out different lengths of the training period can be a valuable way of discovering how persistent a behavior actually is. Don’t go nuts, though. You need to avoid trying out so many variations that you end up with something that, by pure randomness, looks good but doesn’t generalize. You can avoid this kind of overfitting by being specific in your hypothesis formulation and rigorously testing your candidate models on multiple out-of-sample, out-of-time datasets; at all costs, avoid selection and look-ahead biases.
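
As a minimal sketch of this kind of experiment (nothing DataRobot-specific; it assumes a date-indexed pandas DataFrame, a hypothetical feature list and target column, and uses scikit-learn), you can score progressively longer training windows against the same out-of-time test period:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

def score_training_window(df, feature_cols, target_col, train_years, test_start):
    """Train on the `train_years` immediately before `test_start`,
    then evaluate out-of-time on everything from `test_start` onwards."""
    test = df.loc[df.index >= test_start]
    train_from = pd.Timestamp(test_start) - pd.DateOffset(years=train_years)
    train = df.loc[(df.index >= train_from) & (df.index < test_start)]

    model = GradientBoostingRegressor().fit(train[feature_cols], train[target_col])
    return mean_absolute_error(test[target_col], model.predict(test[feature_cols]))

# Compare a short, recent history against progressively longer ones,
# always scoring on the same held-out period to avoid look-ahead bias.
# (df, FEATURES and "target" are placeholders for your own data.)
# for years in (1, 3, 5, 10):
#     print(years, score_training_window(df, FEATURES, "target", years, "2019-01-01"))
```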

Are there particular areas of the markets where you see machine learning working better or worse than in others?

We see machine learning models adding value in lots of different areas of the markets and many different asset classes. As previously discussed, we see particular power in predicting second-order variables (i.e. modeling variables that influence returns, instead of directly predicting returns) and in a variety of use cases that form part of the business of financial markets (see blog post 1).

One interesting pattern that we have noticed is that generally, the longer the prediction horizon or data frequency, the more difficult it is to build good machine learning models.  So quarterly numbers tend to be very difficult to work with, and monthly numbers can be a struggle sometimes, too. With such infrequent data, there is a trade-off between using a very limited number of data points in order to not go beyond the current behavioral environment (limited data often makes it hard to build good models), or working with a longer history that spans multiple behavioral régimes (thus being a worse fit for the individual ones, so again it’s hard to build good models). On the other hand, there are a lot of great use cases using tick and market microstructure data, which is a bit like a firehose: if you need some more data to work with, you just need to open the tap for a little while.

Another pattern we see is not a surprise: the less efficient the price discovery mechanism is, the more value that machine learning models can add—as long as there’s enough training data. So machine learning models probably won’t add much value on the U.S. Treasury Bond future, nor will they on a corporate bond that trades once every six months: the sweet spot will be somewhere in between.

One other thing that’s worth mentioning: simulation-type problems can be a bit of a tricky fit for supervised machine learning, as there’s often no clear concept of a “target variable” to predict. That said, you can use machine learning models to make the predictions which enter the simulations, instead of linear or parametric models; this generally doesn’t make the process any faster, but allows the simulator to take advantage of non-linearity, which can help. In some cases, machine learning can also be used to predict simulation outcomes as a function of the simulation input parameters; this can, for instance, make it much faster to price certain exotic derivatives.
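
To make that last point concrete, here is a hedged sketch of the surrogate-model idea: run an expensive simulator over a sample of input parameters, then train a fast machine learning model to predict the simulated outcome directly from those parameters (the toy pricer and parameter ranges below are invented purely for illustration):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def expensive_simulation(params):
    """Stand-in for a slow Monte Carlo pricer: maps input parameters
    (volatility, maturity, moneyness) to a simulated price."""
    vol, maturity, moneyness = params
    return max(moneyness - 1.0, 0.0) + 0.4 * vol * np.sqrt(maturity)

# Run the simulator on a sample of parameter combinations...
X = rng.uniform([0.05, 0.1, 0.5], [0.6, 5.0, 1.5], size=(2000, 3))
y = np.array([expensive_simulation(p) for p in X])

# ...then fit a fast surrogate that predicts the outcome from the inputs.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = GradientBoostingRegressor().fit(X_train, y_train)
print("surrogate R^2 on held-out parameters:", surrogate.score(X_test, y_test))
```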

You mentioned the difficulties that certain machine learning algorithms have with extrapolating beyond the bounds of the training data. If need be, can you focus on those algorithms that are able to extrapolate in DataRobot?

Yes, very easily — DataRobot’s leaderboard ranks the different candidate blueprints, models and algorithms by their out-of-sample performance. If you don’t want to use a model based on, say, decision trees, you would simply select a model trained using another algorithm family. The leaderboard comparison will show you whether there’s a trade-off between that model and the tree-based models in terms of out-of-sample performance.

As a reminder: even if an algorithm is able to make predictions that go beyond the limits of the training data, those predictions won’t necessarily make any sense, as you have no experience on whether the behaviors inside the training data are consistent in environments beyond it. Proceed with extreme caution!
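
The extrapolation caveat is easy to demonstrate with a generic scikit-learn example (again, nothing DataRobot-specific): a tree-based model trained on x between 0 and 10 cannot predict beyond the range of targets it has seen, while a linear model will happily extrapolate.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X_train = rng.uniform(0, 10, size=(500, 1))
y_train = 3.0 * X_train.ravel() + rng.normal(scale=0.5, size=500)

tree_model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
linear_model = LinearRegression().fit(X_train, y_train)

# Predict well outside the training range.
X_new = np.array([[20.0]])
print("tree-based prediction:", tree_model.predict(X_new))  # stuck near ~30, the largest target seen
print("linear prediction:", linear_model.predict(X_new))    # extrapolates to roughly 60
```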

How would you handle scenarios where certain predictors are missing data for a few years in a longitudinal dataset, maybe because the data gathering did not begin for that predictor until recently?

First, I’d check whether the predictors actually add any value to the model by building a few different variants of the model: one trained on the full dataset including the variables with limited history (let’s call this Model A), trained over the period for which the full data is available, and another trained over the same period of time but excluding the limited-history variables (Model B). I’d also train a third model covering the full history that excludes the limited-history variables (Model B*). If Model A performs better than Model B, I probably would take that result and not investigate further; if it doesn’t, the comparison between Model B and Model B* will tell me whether adding further history actually helps model performance. If it does, and it’s better than Model A, I’d look for proxies for the limited-history variables with a longer history; if not, Model A is good to go.

If you’re referring to a scenario where you’re looking to backtest a strategy over a longer period of time and some of the data in your current model wouldn’t have been available in past periods, the solution is even simpler: evaluate a model built on just the longer history data for the period when the shorter history data isn’t available, then use a model built on the full dataset once it’s available.

So, tl;dr: try out different variants, empiricism wins again. Don’t go crazy with the different variants, as you don’t want to do the data science version of p-value hacking (quants will know this as the dreaded “data mining”). But comparing multiple models built in different ways usually gives good insights, especially when using DataRobot’s standardized analytics.  
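
For readers who prefer code to prose, here is a minimal sketch of the Model A / Model B / Model B* comparison described above, assuming a date-indexed pandas DataFrame and hypothetical column lists (the algorithm and metric are placeholders, not a prescription):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

def fit_and_score(train, test, feature_cols, target):
    model = GradientBoostingRegressor().fit(train[feature_cols], train[target])
    return mean_absolute_error(test[target], model.predict(test[feature_cols]))

def compare_variants(df, long_history_cols, short_history_cols,
                     short_history_start, test_start, target="y"):
    """Model A: all features, short history.
       Model B: long-history features only, short history.
       Model B*: long-history features only, full history."""
    test = df.loc[df.index >= test_start]
    short_train = df.loc[(df.index >= short_history_start) & (df.index < test_start)]
    full_train = df.loc[df.index < test_start]

    return {
        "A":  fit_and_score(short_train, test, long_history_cols + short_history_cols, target),
        "B":  fit_and_score(short_train, test, long_history_cols, target),
        "B*": fit_and_score(full_train, test, long_history_cols, target),
    }
```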

We hear a lot about the hybrid approach in machine learning. What is it, and does DataRobot support it?

Generally, hybrid approaches in machine learning combine two or more different types of algorithms in order to reduce model error and potentially solve problems which the individual algorithms would be less suited to, or less performant at. DataRobot has quite a few blueprints (machine learning pipelines) which use such approaches, typically combining a supervised machine learning algorithm (one that is designed to predict a target variable by learning from historical observations) with one or more unsupervised learning techniques (clustering, dimensionality reduction). We find that adding clustering, in particular, to a supervised machine learning algorithm like XGBoost can reduce prediction error by 10-15%, depending on the use case.
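
As an illustration of the pattern (using scikit-learn’s gradient boosting as a stand-in for XGBoost, and synthetic data), an unsupervised clustering step can be bolted onto a supervised learner by feeding the cluster assignment in as an extra feature; whether it helps by the 10-15% mentioned above depends entirely on the dataset:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# Unsupervised step: cluster the rows and use the cluster id as an extra feature.
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
X_hybrid = np.column_stack([X, clusters])

# Supervised step: the same learner with and without the cluster feature.
base = cross_val_score(GradientBoostingClassifier(random_state=0), X, y, cv=3).mean()
hybrid = cross_val_score(GradientBoostingClassifier(random_state=0), X_hybrid, y, cv=3).mean()
print(f"accuracy without clusters: {base:.3f}, with clusters: {hybrid:.3f}")
```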

How does the greedy search algorithm to populate DataRobot’s leaderboard work?

In a nutshell: we first identify the set of all the machine learning pipelines (“blueprints”) that can be applied to the problem at hand, then use a combination of heuristics (to ensure algorithm diversity) and recommendation (to identify those blueprints that are likely to be performant) to identify the initial algorithms. Multiple rounds of model training ensue, starting with a large spectrum of blueprints that are trained on small amounts of data, gradually reducing the number of blueprints trained (filtering on out-of-sample performance), while increasing the size of the training data, finally cross-validating the best-performing algorithms and trying out some ensembles to see whether this will further improve the performance.
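
DataRobot’s actual implementation is more sophisticated than this, but the following scikit-learn sketch captures the survivor-selection pattern: train a diverse pool of candidates on a small sample, rank them on out-of-sample performance, and retrain only the best on progressively more data.

```python
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=30, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.25, random_state=0)

# A deliberately diverse starting pool of candidate pipelines ("blueprints").
candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gbm": GradientBoostingClassifier(random_state=0),
}

# Each round: train on a larger sample, rank on out-of-sample AUC, keep the best.
for fraction, keep in [(0.1, 2), (0.4, 1), (1.0, 1)]:
    n = int(len(X_train) * fraction)
    scores = {}
    for name, est in candidates.items():
        model = clone(est).fit(X_train[:n], y_train[:n])
        scores[name] = roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])
    survivors = sorted(scores, key=scores.get, reverse=True)[:keep]
    print(f"{n} rows -> {scores}, keeping {survivors}")
    candidates = {name: candidates[name] for name in survivors}
```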

Please elaborate on the different types of feature extraction that DataRobot does.

DataRobot does four main kinds of feature extraction and selection automatically: 

  • Transforming features to match a particular machine learning algorithm or make it more performant (automated feature engineering), including dimensionality reduction using techniques such as principal-component analysis or singular value decomposition
  • Evaluating differences, ratios and other transformations and combinations in datasets where observations are independent (automated feature discovery)
  • Constructing rolling transformations and evaluating different lags in time series problems where autoregressiveness is present (time series feature engineering)
  • Automatically generating a reduced feature list on a modeling project’s best model and retraining it (automated feature selection) 

Additionally, users have the flexibility to build a wide range of feature transformations using the DataRobot Paxata data preparation platform before pushing the data to DataRobot MLdev.  The MLdev API also integrates seamlessly with Python and R’s powerful data preparation capabilities, as well as providing connectivity to other databases such as KDB.
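
A rough pandas/scikit-learn sketch of the kinds of transformations listed above (the toy data and column names are invented for illustration; this is not the DataRobot implementation itself):

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# Toy daily series with two raw features.
rng = np.random.default_rng(0)
dates = pd.date_range("2020-01-01", periods=250, freq="B")
df = pd.DataFrame({
    "price": np.linspace(100, 120, 250) + rng.normal(0, 1, 250),
    "volume": rng.integers(1_000, 5_000, 250),
}, index=dates)

# Time series feature engineering: lags and rolling transformations.
df["price_lag_5"] = df["price"].shift(5)
df["price_roll_mean_20"] = df["price"].rolling(20).mean()

# Difference / ratio style features.
df["price_change_1d"] = df["price"].diff()
df["price_to_volume"] = df["price"] / df["volume"]

# Dimensionality reduction over the engineered features.
features = df.dropna()
components = PCA(n_components=2).fit_transform(features)
print(components[:3])
```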

What are the advantages of an enterprise solution like DataRobot compared to open platforms like scikit-learn or TensorFlow?

Cutting-edge data science and machine learning are simply unthinkable without open-source packages such as TensorFlow; this is where the innovation lies these days. That said, DataRobot is not built by the crowd. We have some 350 incredibly talented engineers and data scientists on our team, whose job it is to engineer our platform to enterprise grade and work with our customers to ensure that it meets their needs. This includes a number of contributors to popular open-source libraries such as numpy, pandas, scikit-learn, keras, caret, pic2vec, urllib3, and many others.

So we take the best of what’s out there in the open-source data science community and ensure that it’s suitable for enterprise use — contributing to the open source community itself when needed to make this happen.  For example, recently members of our team were building some modeling pipelines, including elements from an open-source machine learning library which we had not previously supported. Their testing revealed some critical bugs under the hood and development efforts were then refocused towards fixing the actual open-source library and pushing those changes out to the community.   

With a “best of both worlds” solution such as DataRobot, there’s still someone at the end of a phone to shout at if there’s an issue. And you don’t have to worry about making sure that all the parts of the open source stack are up-to-date either.

Does the DataRobot engine run on my desktop computer? How is performance managed, CPU vs GPU selection, etc?

DataRobot is a powerful platform whose requirements exceed the capabilities of a single desktop computer. There are various ways of running the DataRobot platform:

  • On DataRobot’s managed AI cloud
  • Via the FactSet Workstation, with the backend running on FactSet’s AWS cloud
  • Inside your enterprise’s firewall, on a Virtual Private Cloud such as Microsoft Azure, Amazon Web Services, or Google Cloud
  • Inside your enterprise’s firewall, on a data lake/cluster running Hadoop
  • Inside your enterprise’s firewall, on a bare-metal Linux cluster

Performance is managed dynamically by the DataRobot app engine, with the user being able to choose how much compute to apply to a modeling project by selecting the number of workers (each worker being able to train one machine learning model at a time). DataRobot runs entirely on CPUs; no expensive GPUs are needed.

Would you say that DataRobot’s learning curve is digestible for a portfolio manager or analyst, or is it targeted at in-house data analysts and quants who would live in the app?

I’d say that a certain amount of data literacy is important — I wouldn’t expect great results from giving DataRobot to an “old school” portfolio manager who struggles with Excel, for instance. We have two target user groups: first, people who understand the data well but aren’t quants or machine learning experts and want to be able to harness the power of machine learning without needing to get technical or learn how to code. We greatly automate the process with smart default settings and a variety of guardrails for this “democratization” audience. Through the process of using DataRobot and its built-in explainability and documentation, such users learn a lot about machine learning and how to frame machine learning problems, often quickly moving on to complex problem sets.

Our other target group is, of course, sophisticated quants and data scientists, who use DataRobot’s automation as a force multiplier for their productivity, by automating the boring, repetitive stuff where they don’t necessarily have an edge.

Is there a course designed around DataRobot to give us hands-on experience?

A wide variety of instructor-led and self-paced training programmes for different skill levels are available at https://university.datarobot.com/, with further resources and self-paced learning at DataRobot Community’s learning center: https://community.datarobot.com/t5/learning-center/ct-p/Learning 

There’s also the DataRobot free trial, details at:

In your demo, you built 72 different models to solve this binary classification problem. Some users may not have the expertise in machine learning to make the choice between models, and blindly using a model can be dangerous. What do you do to prevent giving a machine gun to a 3-year-old?

Great question.  It’s a combination of several things that work together. 

First, make sure that the machine gun has “safety mechanisms” such as built-in best practices and guardrails. For example, models are ranked strictly on their out-of-sample performance, in-sample performance data is never exposed, and the appropriate data engineering is combined with each algorithm in the machine learning pipelines.

Second, train the users in “gun safety.” This doesn’t have to take that long — for instance, our Citizen Data Scientist Starter Quest takes an afternoon and is self-paced, and our AutoML I course consists of three four-hour sessions — but provides valuable context on how to frame machine learning problems and evaluate the models.

Third, make sure that the gun’s “scope” shows the users what they’re pointing at: provide users with sophisticated, standardized analytics that allow them to evaluate each model’s performance in-depth and understand the model’s drivers and how the model will respond in different scenarios.

And finally, support the users with experienced data scientists, a wealth of self-service content, and a growing online user community. (Sorry, ran out of gun metaphors.)

What, over 2,000 words and you still haven’t answered my question?

Hot damn.  Come on over to community.datarobot.com and we’ll do our best to answer it there.

Check out all of the blog series: part 1, part 2, part 3, part 4.

Webinar
Machine Learning for Quant Investing with DataRobot on FactSet

The post AI in Financial Markets, Part 5 of 4: Flippantly Answered Questions appeared first on DataRobot.



from Blog – DataRobot https://ift.tt/2IA7f1x
via Gabe's Musings

Tuesday, October 6, 2020

Tableau Conference-ish 2020

Unlock AI For BI

DataRobot is proud to once again be sponsoring Tableau Conference, or Tableau Conference-ish 2020 (TC20) as it has been dubbed this year for its virtual format. From October 6th to 8th, we invite you to learn more about the perfect union of AI and BI at our virtual booth. Swing by for helpful content, the opportunity to sign up for a free trial (tailored just for you), and some sought-after DataRobot swag.

TC20 will also provide the opportunity to hear and learn directly from joint DataRobot + Tableau customers during our session, “DataRobot: Unlock AI for BI.” Hear from data leaders from the NHS, Alcon, and our host, Tableau, as they speak with me, Karin Jenson, Director of Business Analyst AI Success. Panelists will share how they use DataRobot with Tableau to leverage AI and automated machine learning to solve complex predictive problems easily. Our panelists will share jaw-dropping use cases and talk about the upward career trajectory that the incorporation of AI and machine learning into business analytics has provided for them. Their stories are truly inspirational and not to be missed. Be sure to take a look at the TC20 agenda to take note of when the session episode airs for you locally.

After the session, join me and other members of the DataRobot team at our virtual booth, where you can sign up for the TC20-specific trial. Not only will the trial get you hands-on experience with DataRobot, but you’ll also get: 

  • Informative resources and self-paced content that will help you succeed in your predictive insights journey
  • Community support specifically for new users coming from Tableau Conference 2020
  • The opportunity to join the DataRobot for the Business Analyst advisory board. This exclusive invitation gives you access to DataRobot swag, beta sneak peeks, assistance from leading data scientists, and invitations to quarterly meetings where you can weigh in on what you need from DataRobot.

We look forward to connecting with you virtually at Tableau Conference-ish. Until then, please take a moment to get pumped up with a sneak peek of one of our commercials airing during the Super Bowl of analytics.

Virtual Event
Join us at TC20

October 6-8, 2020

The post Tableau Conference-ish 2020 appeared first on DataRobot.



from Blog – DataRobot https://ift.tt/34oXkDk
via Gabe's Musings

Friday, October 2, 2020

AI Experience EMEA Virtual Conference: Accelerating Impact With Trusted AI

In 2020, we’re all pivoting business strategies, focusing on impact, and finding ways to “do it better and different” (coincidentally, a DataRobot core value). Organizations across industries have needed to become more resilient and focus on reducing costs and risks, retaining customers, and finding new revenue streams.

In June, we held our first-ever virtual AI conference, AI Experience Worldwide, to address these challenges. The conference was such a hit amongst attendees that we held another for our APAC audience. And now, we’re coming for EMEA.

On November 10, 2020, join us for a full day of AI insights from our in-house experts and from leaders across a wide range of industries.

This event is designed to bring together AI visionaries, creators and consumers of AI, partners, customers, and decision makers to:

  • Learn how pragmatic, value-focused AI is making a measurable impact on organizations.
  • Understand how to address the hyper-critical issues impacting your organization today, throughout the rest of 2020 and into 2021. 
  • Hear AI-driven use cases from pioneers across a range of industries.
  • Network with industry peers facing the exact same challenges you are.

Trustworthy, impactful AI is an increasingly important priority for organizations. In the recently-released 2020 Gartner Hype Cycle for Artificial Intelligence, algorithmic trust, democratized AI, and AI governance were prominently featured. As you plan for 2021, it’s critical to derive the most value you can from your data. 

Our goal is for you to walk away from the AI Experience EMEA Virtual Conference with actionable insights to take your organization to the next level of AI Success with trustworthy, impactful AI. Wherever you are on your AI journey, we’re committed to being your trusted partner every step of the way.

VIRTUAL CONFERENCE
AI Experience EMEA Virtual Conference: Accelerating Impact With Trusted AI

The post AI Experience EMEA Virtual Conference: Accelerating Impact With Trusted AI appeared first on DataRobot.



from Blog – DataRobot https://ift.tt/34doA7I
via Gabe's Musings

Monday, August 24, 2020

Forrester Total Economic Impact Study of DataRobot: 514% ROI and Payback within 3 Months

With AI proving to be the most transformative technology of our time and companies today needing to pivot faster and drive near-term impact from their tech investments, many organizations are looking to generate higher ROI from AI as quickly as possible. From predicting customer churn to reducing fraud and avoiding supply chain disruptions, the possibilities are virtually limitless as to how AI can increase revenue, reduce costs, and improve profitability. But how can companies predict the expected value of an AI application or use case so they can justify new investments in the face of tight budgets and headcounts?

To help answer this question, DataRobot today announced the results of a new study: The Total Economic Impact (TEI) of DataRobot. Conducted by Forrester Consulting on behalf of DataRobot, the commissioned study reveals that organizations using DataRobot’s AI platform achieve a net present value (NPV) of $4 million and a 514% return on investment (ROI), with payback often in as little as three months.

As the basis for the report, Forrester interviewed four customers in the retail, healthcare, and manufacturing sectors with experience using DataRobot, to better understand the benefits, costs, and risks associated with using the platform. These customers were looking to overcome key challenges, such as forecasting demand and tackling fraud.

Prior to using DataRobot, the customers relied on their data science teams to do the heavy lifting of data preparation, model development and training, and model deployment using traditional open-source technologies, such as the Python and R programming languages and their associated libraries and frameworks.

Some customers were also hindered by their use of legacy data analysis technologies that failed to keep pace with the advancements in AI and machine learning over the past decade. This created environments with lengthy AI project timelines and frequently missed deadlines, in which organizations often never deployed or operationalized the models they developed.

Forrester Consulting designed a composite organization based on the characteristics of the organizations interviewed. They then constructed a financial model representative of the interview findings, using the TEI methodology based on four fundamental elements: benefits, costs, flexibility, and risks. Forrester’s TEI methodology serves to provide a complete picture of the total economic impact of purchase decisions.

The results were remarkable, with firms reporting significant value relative to cost.

  • Cost savings from reduced seasonal overhiring in retail: $500,000
  • Cost savings from reduced healthcare fraud: $10 million
  • Increased revenue from improved demand forecasting in manufacturing: $50 million – $200 million
  • Significant cost savings from avoidance of hiring a data science team 3x as large

Determining ROI of AI

Many of our customers ask for help in estimating the value of AI when it’s being used to augment an existing process where some predictive analytics are already in place. The methodology in this report, drawing on data from 4 real-world customer deployments of AI, should provide a useful framework for anyone looking to justify an AI investment.

Today’s business climate has never been so challenging, and organizations need agile trusted solutions that can steer intelligent decision-making in the face of market turbulence and continuously evolving customer needs. 

DataRobot’s end-to-end enterprise AI platform addresses these demands. By preparing data, automating the training of machine learning models, operationalizing AI, and continuously monitoring AI assets, DataRobot enables organizations to enhance prediction accuracy, accelerate time to insight, reduce risk, and increase revenue – all without requiring heavy investment in data science teams. 

Read the study to learn more.

Study
Forrester Total Economic Impact™ Study of DataRobot

The post Forrester Total Economic Impact Study of DataRobot: 514% ROI and Payback within 3 Months appeared first on DataRobot.



from Blog – DataRobot https://ift.tt/2FLCs0k
via Gabe's Musings

Tuesday, August 11, 2020

Find Your Path to AI Success with DataRobot Pathfinder

Introducing DataRobot Pathfinder 

The potential impact AI will have on the global economy has been estimated at figures so large (about $13 trillion by 2030, according to McKinsey) that they are hard to conceptualize at the local level. Even with all the research and media around AI, many executives are still perplexed as to how AI can improve the profitability of their individual organizations. From the difficulties of choosing the right use case to the intricacies of adopting predictions into existing workflows, AI comes with a wide array of challenges that prevent organizations from maximizing the returns on their AI investments.

Enter DataRobot Pathfinder, the industry’s leading resource for practical guidance on how to effectively implement AI. Pathfinder is an online library of more than 100 use cases dedicated to helping you discover relevant use cases and understand how they can be deployed within your organization. These use cases include industry-agnostic applications of AI, such as forecasting demand, scoring new leads, and reducing customer churn. They also include industry-specific applications of AI, such as reducing hospital readmissions, predicting insurance claim severity, and detecting money laundering.

Pathfinder eliminates the noise to show you what you should prioritize in your roadmap. The tool is also developed around a proven technical and business framework to show how you can implement AI. By familiarizing yourself with this framework, you can understand how to apply AI to even more unique problems that your organization faces. 

With its combination of actionable use cases and applicable best practices, Pathfinder helps you expand your adoption of AI. The tool not only helps data scientists uncover more efficient ways to build and deploy models, but also shows business executives and analysts how they can embed predictions into their decision processes. The content shown on Pathfinder is driven by the collective knowledge of more than 35 DataRobot data scientists, accumulated through decades of academic and industry experience.

Discover Use Cases

Pathfinder is home to 100+ use cases across more than ten industries. Our data scientists have curated the use cases on Pathfinder to ensure that they are not only relevant but also representative of their industries. DataRobot does not share customer data, personal data, or sensitive use cases on Pathfinder. 

You can benefit from Pathfinder by discovering use cases which your organization may not have considered before. You can filter the use cases based on their business and technical parameters. This includes the ability to filter by industry, problem type (i.e., regression, classification, time series), and value driver, which helps you find use cases that align with high-level business objectives, such as increasing revenue, decreasing costs, and improving customer experience. 

The tabs displaying each use case are designed for you to intuitively explore and identify relevant use cases, which can be sorted based on their popularity, impact, and complexity. While these are broad generalizations that need to be followed up by a deeper analysis of your organization’s unique position, they can give you a quick, high-level overview of which use cases may fit with your organization’s current data science maturity. For instance, organizations only beginning to implement AI should select low-hanging fruit, such as use cases that are high in impact but low in complexity.


Understand Use Cases

Pathfinder’s use cases are currently segmented into Executive Summary and End-to-End formats. While an Executive Summary offers only a high-level overview of a use case, an End-to-End guide explains in detail how the use case can be implemented from both technical and business perspectives. Pathfinder has more than 30 End-to-End use cases, and more are being released on a regular basis.

With Pathfinder’s End-to-End guides, you can understand how to build models for specific use cases and deploy their predictions into your decision environments. 

On the technical side, Pathfinder helps you frame use cases into data science problems and reveals the features you will need to train your models. The tool also shows step-by-step tutorials on how to interpret and evaluate models using the DataRobot AI platform. 

On the business side, decision environments are the methods by which predictions will ultimately be used for business decision-making. Pathfinder will guide you towards understanding which stakeholders should consume the predictions, how they will consume the predictions, and what actionable decisions they can make to impact real-world business processes. 


Share Use Cases 

Once you have explored Pathfinder and identified use cases relevant to your organization, you can shortlist these use cases by bookmarking them. These bookmarks will appear on your personal AI roadmap and be saved for future viewing. You can easily share your roadmap with colleagues, who can then curate your list and select specific use cases to add to their own roadmap. 


Find Your Path to AI Success Today 

Pathfinder is available for free to the general public. Whether you are just starting your journey or are years into deploying use cases, Pathfinder will help you uncover new opportunities to expand your AI footprint. We are thrilled to be a small part of your efforts to generate greater value for your organization through automation and optimization. 

Use Case Library
Find your path to AI success with Pathfinder

100+ practical use cases at your fingertips

The post Find Your Path to AI Success with DataRobot Pathfinder appeared first on DataRobot.



from Blog – DataRobot https://ift.tt/3isOMkf
via Gabe's Musings