Sunday, September 20, 2020

InterSystems IRIS – the All-Purpose Universal Platform for Real-Time AI/ML

Author: Sergey Lukyanchikov

Challenges of real-time AI/ML computations

We will start with examples that we, the Data Science practice at InterSystems, have encountered:

  • A “high-load” customer portal is integrated with an online recommendation system. The plan is to reconfigure promo campaigns at the level of the entire retail network (we will assume that a “segment-tactic” matrix is used instead of a “flat” promo campaign master). What will happen to the recommender mechanisms? What will happen to data feeds and updates into the recommender mechanisms (the volume of input data having increased 25,000 times)? What will happen to the recommendation rule generation setup (the need to reduce the recommendation rule filtering threshold 1,000-fold due to a thousandfold increase in the volume and “assortment” of the rules generated)?
  • An equipment health monitoring system uses “manual” data sample feeds. Now it is connected to a SCADA system that transmits thousands of process parameter readings each second. What will happen to the monitoring system (will it be able to handle equipment health monitoring on a second-by-second basis)? What will happen once the input data receives a new block of several hundred columns with readings from sensors recently added to the SCADA system (will it be necessary, and for how long, to shut down the monitoring system to integrate the new sensor data into the analysis)?
  • A complex of AI/ML mechanisms (recommendation, monitoring, forecasting) depends on each other’s results. How many man-hours will it take every month to adapt the functioning of those AI/ML mechanisms to changes in the input data? What is the overall “delay” in supporting business decision making by the AI/ML mechanisms (the refresh frequency of the supporting information against the feed frequency of new input data)?

Summarizing these and many other examples, we have come up with a formulation of the challenges that materialize because of transition to using machine learning and artificial intelligence in real time:

  • Are we satisfied with the speed of creation and adaptation (versus the speed at which the situation changes) of AI/ML mechanisms in our company?
  • How well do our AI/ML solutions support real-time business decision making?
  • Can our AI/ML solutions self-adapt (i.e., continue working without involving developers) to a drift in the data and the resulting changes in business decision-making approaches?

This article is a comprehensive overview of the InterSystems IRIS platform capabilities for universal support of AI/ML mechanism deployment, of AI/ML solution assembly (integration), and of AI/ML solution training (testing) based on intense data flows. We will draw on market research, on practical examples of AI/ML solutions, and on the conceptual aspects of what we refer to in this article as a real-time AI/ML platform.

What surveys show: real-time application types

The results of the survey conducted by Lightbend in 2019 among some 800 IT professionals speak for themselves:

Figure 1 Leading consumers of real-time data

We will quote the fragments of the survey report that are most important for us:

“… The parallel growth trends for streaming data pipelines and container-based infrastructure combine to address competitive pressure to deliver impactful results faster, more efficiently and with greater agility. Streaming enables extraction of useful information from data more quickly than traditional batch processes. It also enables timely integration of advanced analytics, such as recommendations based on artificial intelligence and machine learning (AI/ML) models, all to achieve competitive differentiation through higher customer satisfaction. Time pressure also affects the DevOps teams building and deploying applications. Container-based infrastructure, like Kubernetes, eliminates many of the inefficiencies and design problems faced by teams that are often responding to changes by building and deploying applications rapidly and repeatedly, in response to change. … Eight hundred and four IT professionals provided details about applications that use stream processing at their organizations. Respondents were primarily from Western countries (41% in Europe and 37% in North America) and worked at an approximately equal percentage of small, medium and large organizations. …

… Artificial intelligence is more than just speculative hype. Fifty-eight percent of those already using stream processing in production AI/ML applications say it will see some of the greatest increases in the next year.

  • The consensus is that AI/ML use cases will see some of the largest increases in the next year.
  • Not only will adoption widen to different use cases, it will also deepen for existing use cases, as real-time data processing is utilized at a greater scale.
  • In addition to AI/ML, enthusiasm among adopters of IoT pipelines is dramatic — 48% of those already incorporating IoT data say this use case will see some of the biggest near-term growth. … “

This quite interesting survey shows that the perception of machine learning and artificial intelligence scenarios as the leading consumers of real-time data is already “at the doorstep”. Another important takeaway is the perception of AI/ML through a DevOps prism: we can already observe a transformation of the still predominant “one-off AI/ML with a fully known dataset” culture.

A real-time AI/ML platform concept

One of the most typical areas where real-time AI/ML is used is manufacturing process management in industry. Using this area as an example and considering all the above ideas, let us formulate the real-time AI/ML platform concept.

Use of artificial intelligence and machine learning for the needs of manufacturing process management has several distinctive features:

  • Data on the condition of a manufacturing process is generated very intensely: at high frequency and over a broad range of parameters (up to tens of thousands of parameter values transmitted per second by a SCADA system)
  • Data on detected defects, not to mention evolving defects, is, on the contrary, scarce and occasional, with insufficient categorization of defect types and poor localization in time (usually found in the form of manual records on paper)
  • From a practical standpoint, only an “observation window” is available for model training and application, reflecting process dynamics over a reasonable moving interval that ends with the most recent process parameter readings

These distinctions require us not only to receive and perform basic processing, in real time, of an intense “broadband signal” from a manufacturing process, but also to execute, in parallel, AI/ML model application, training and accuracy control in real time. The “frame” that our models “see” in the moving observation window is permanently changing – and so is the accuracy of the AI/ML models that were trained on one of the previous “frames”. If the AI/ML modeling accuracy degrades (e.g., the value of the “alarm vs. norm” classification error surpasses the given tolerance boundaries), retraining on a more recent “frame” should be triggered automatically. The choice of the moment to start the retraining must consider both the duration of the retrain procedure and the speed at which the accuracy of the current model versions degrades, because the current versions keep being applied while the retrain procedure runs, until the “retrained” versions of the models are obtained.
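To make this triggering rule concrete, here is a minimal sketch in Python of the kind of decision logic a monitoring process could apply. It is an illustration only: the class name, the thresholds and the retrain duration are assumptions, not values or code taken from the prototype described later in this article.

```python
# Illustrative sketch: start a retrain early enough that the new model version is ready
# before the currently served model is projected to fall below a hard accuracy floor.
# All thresholds and the retrain duration are assumed, not taken from the prototype.

from collections import deque

class RetrainTrigger:
    def __init__(self, soft_threshold=0.85, hard_floor=0.80,
                 retrain_duration_frames=10, history_size=20):
        self.soft_threshold = soft_threshold                    # tolerance boundary for accuracy
        self.hard_floor = hard_floor                            # level we must never operate below
        self.retrain_duration_frames = retrain_duration_frames  # how many frames a retrain takes
        self.history = deque(maxlen=history_size)               # recent accuracy measurements

    def degradation_per_frame(self):
        """Average accuracy loss per frame over the observation window."""
        if len(self.history) < 2:
            return 0.0
        return (self.history[0] - self.history[-1]) / (len(self.history) - 1)

    def should_retrain(self, current_accuracy):
        self.history.append(current_accuracy)
        if current_accuracy < self.soft_threshold:
            return True                                         # tolerance boundary already crossed
        # Start early if the projected accuracy at the end of a retrain run would fall
        # below the hard floor while the current model keeps serving requests.
        projected = current_accuracy - self.degradation_per_frame() * self.retrain_duration_frames
        return projected < self.hard_floor
```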

InterSystems IRIS possesses key in-platform capabilities for supporting real-time AI/ML solutions for manufacturing process management. These capabilities can be grouped in three major categories:

  • Continuous Deployment/Delivery (CD) of new AI/ML mechanisms, or of modifications to existing ones, into a real-time production solution based on the InterSystems IRIS platform
  • Continuous Integration (CI) of inbound process data flows, of AI/ML model application/training/accuracy-control queues, and of data/code/orchestration for real-time interactions with mathematical modeling environments – into a single production solution on the InterSystems IRIS platform
  • Continuous Training (CT) of AI/ML mechanisms performed in mathematical modeling environments using data, code, and orchestration (“decision making”) passed from the InterSystems IRIS platform

The grouping of platform capabilities relative to machine learning and artificial intelligence into the above categories is not accidental. We quote a methodological publication by Google that gives a conceptual basis for such a grouping:

“… DevOps is a popular practice in developing and operating large-scale software systems. This practice provides benefits such as shortening the development cycles, increasing deployment velocity, and dependable releases. To achieve these benefits, you introduce two concepts in the software system development:

  • Continuous Integration (CI)
  • Continuous Delivery (CD)

An ML system is a software system, so similar practices apply to help guarantee that you can reliably build and operate ML systems at scale.

However, ML systems differ from other software systems in the following ways:

  • Team skills: In an ML project, the team usually includes data scientists or ML researchers, who focus on exploratory data analysis, model development, and experimentation. These members might not be experienced software engineers who can build production-class services.
  • Development: ML is experimental in nature. You should try different features, algorithms, modeling techniques, and parameter configurations to find what works best for the problem as quickly as possible. The challenge is tracking what worked and what didn't, and maintaining reproducibility while maximizing code reusability.
  • Testing: Testing an ML system is more involved than testing other software systems. In addition to typical unit and integration tests, you need data validation, trained model quality evaluation, and model validation.
  • Deployment: In ML systems, deployment isn't as simple as deploying an offline-trained ML model as a prediction service. ML systems can require you to deploy a multi-step pipeline to automatically retrain and deploy model. This pipeline adds complexity and requires you to automate steps that are manually done before deployment by data scientists to train and validate new models.
  • Production: ML models can have reduced performance not only due to suboptimal coding, but also due to constantly evolving data profiles. In other words, models can decay in more ways than conventional software systems, and you need to consider this degradation. Therefore, you need to track summary statistics of your data and monitor the online performance of your model to send notifications or roll back when values deviate from your expectations.

ML and other software systems are similar in continuous integration of source control, unit testing, integration testing, and continuous delivery of the software module or the package. However, in ML, there are a few notable differences:

  • CI is no longer only about testing and validating code and components, but also testing and validating data, data schemas, and models.
  • CD is no longer about a single software package or a service, but a system (an ML training pipeline) that should automatically deploy another service (model prediction service).
  • CT is a new property, unique to ML systems, that's concerned with automatically retraining and serving the models. …”

We can conclude that machine learning and artificial intelligence used with real-time data require a broader set of tools and competencies (from code development to orchestration of mathematical modeling environments), tighter integration among all the functional and subject domains, and better management of human and machine resources.

A real-time scenario: recognition of developing defects in feed pumps

Continuing with the area of manufacturing process management, we will walk through a practical case (already referenced at the beginning): there is a need to set up real-time recognition of developing defects in feed pumps based on a flow of manufacturing process parameter values, as well as on maintenance personnel’s reports on detected defects.

Figure 2 Developing defect recognition case formulation

One of the characteristics of many similar cases in practice is that the regularity and timeliness of the data feeds (SCADA) have to be considered alongside the episodic and irregular detection (and recording) of various defect types. In other words: SCADA data arrives once a second, ready for analysis, while defects are recorded with a pencil in a notebook, indicating a date (for example: “Jan 12 – leakage into cover from 3rd bearing zone”).

Therefore, we could complement the case formulation by adding the following important restriction: we have only one “fingerprint” of a given defect type (i.e., the defect type is represented by the SCADA data as of one concrete date – and we have no other examples for this particular defect type). This restriction immediately places us outside the classical supervised learning paradigm, which presumes that “fingerprints” are available in large quantities.

Figure 3 Elaborating the defect recognition case formulation

Can we somehow “multiply” the single “fingerprint” we have available? Yes, we can. The current condition of the pump is characterized by its similarity to the already recorded defects. Even without applying quantitative methods, a lot can be learned just by observing the dynamics of the parameter values received from the SCADA system:

Figure 4 Pump condition dynamics vs. the concrete defect type “fingerprint”

However, visual inspection (at least for now) is not the most suitable generator of machine learning “labels” in our dynamically progressing scenario. We will instead estimate the similarity of the current pump condition to the already recorded defects using a statistical test.

Figure 5 A statistical test applied to incoming data vs. the defect “fingerprint”

The statistical test estimates the probability that a set of records with manufacturing process parameter values, acquired as a “batch” from the SCADA system, is similar to the records from a concrete defect “fingerprint”. The probability estimated by the statistical test (the statistical similarity index) is then transformed into either 0 or 1, becoming the machine learning “label” for each record of the set being evaluated for similarity. In other words, once the acquired batch of pump condition records is processed using the statistical test, we gain the ability to (a) add that batch to the training dataset for the AI/ML models and (b) assess the accuracy of the current AI/ML model versions when applied to that batch.
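The article does not name the specific statistical test used, so the sketch below illustrates the labeling step with a two-sample Kolmogorov-Smirnov test from SciPy, applied parameter by parameter; the column layout, the averaging of p-values and the 0.05 threshold are all assumptions made purely for illustration.

```python
# Illustrative sketch only: compare a batch of SCADA records to a defect "fingerprint"
# and binarize the resulting similarity index into the 0/1 machine learning label.

import numpy as np
import pandas as pd
from scipy import stats

def similarity_labels(batch: pd.DataFrame, fingerprint: pd.DataFrame,
                      alpha: float = 0.05) -> pd.Series:
    """For each process parameter, a two-sample KS test estimates how likely it is that
    the batch and the fingerprint come from the same distribution. The mean p-value over
    all parameters plays the role of the 'statistical similarity index', which is then
    transformed into the 0/1 label assigned to every record of the batch."""
    p_values = [stats.ks_2samp(batch[col], fingerprint[col]).pvalue
                for col in fingerprint.columns if col in batch.columns]
    similarity_index = float(np.mean(p_values))
    label = int(similarity_index >= alpha)        # 1 = similar to the defect fingerprint
    return pd.Series(label, index=batch.index, name="defect_label")
```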

Figure 6 Machine learning models applied to incoming data vs. the defect “fingerprint”

In one of our previous webinars, we show and explain how the InterSystems IRIS platform allows any AI/ML mechanism to be implemented as continually executed business processes that control the likelihood of the modeling output and adapt the model parameters. The implementation of our pump scenario relies on the complete InterSystems IRIS functionality presented in the webinar. In the analyzer process that is part of our solution, we use reinforcement learning through automated management of the training dataset, rather than classical supervised learning. We add to the training dataset the records that demonstrate a “detection consensus” after both the statistical test (with the similarity index transformed into either 0 or 1) and the current version of the model have been applied – i.e., both the statistical test and the model have produced an output of 1 on such records. At model retraining and validation (when the newly trained model is applied to its own training dataset, after a prior application of the statistical test to that dataset), the flow records that “fail to maintain” an output of 1 under the statistical test (given the permanent presence in the training dataset of the records belonging to the original defect “fingerprint”) are removed from the training dataset, and a new version of the model is trained on the defect “fingerprint” plus the records from the flow that “succeeded”.
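A minimal sketch of this consensus-based management of the training dataset follows. In the prototype this logic lives inside an InterSystems IRIS business process (the analyzer); here it is condensed into a single Python function, with `statistical_test` and `model_predict` as hypothetical helpers standing in for the statistical test and the current model version.

```python
# Illustrative sketch only: `statistical_test(records, fingerprint)` returns 0/1 labels
# from the similarity test, `model_predict(records)` returns 0/1 labels from the current
# model version. Both are hypothetical placeholders, not platform APIs.

import pandas as pd

def update_training_set(training_set, fingerprint, new_batch,
                        statistical_test, model_predict):
    # 1. "Detection consensus": only records on which both the statistical test and the
    #    current model version output 1 are added to the training dataset.
    consensus = (statistical_test(new_batch, fingerprint) == 1) & \
                (model_predict(new_batch) == 1)
    candidate = pd.concat([training_set, new_batch[consensus]], ignore_index=True)

    # 2. Re-validation before retraining: flow records that no longer obtain 1 from the
    #    statistical test (re-applied with the fingerprint permanently present) are dropped.
    revalidated = candidate[statistical_test(candidate, fingerprint) == 1]

    # 3. The new model version is trained on the fingerprint plus the surviving flow records.
    return pd.concat([fingerprint, revalidated], ignore_index=True)
```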

Figure 7 Robotization of AI/ML computations in InterSystems IRIS

If a “second opinion” on the detection accuracy obtained through local computations in InterSystems IRIS is needed, we can create an advisor process that redoes the model training/application on a control dataset using cloud providers (for example, Microsoft Azure, Amazon Web Services, or Google Cloud Platform):

Figure 8 "Second opinion" from Microsoft Azure orchestrated by InterSystems IRIS

The prototype of our scenario is implemented in InterSystems IRIS as an agent system of analytical processes interacting with the piece of equipment (the pump), the mathematical modeling environments (Python, R and Julia), and supporting self-training of all the involved AI/ML mechanisms – based on real-time data flows.

Figure 9 Core functionality of the real-time AI/ML solution in InterSystems IRIS

Some practical results obtained with our prototype:

  • The defect’s “fingerprint” detected by the models (January 12th):

  • The developing defect not included in the “fingerprints” known to the prototype, detected by the models (September 11th, while the defect itself was discovered by a maintenance brigade two days later – on September 13th):

A simulation on real-life data containing several occurrences of the same defect has shown that our solution implemented using InterSystems IRIS platform can detect a developing defect several days before it is discovered by a maintenance brigade.

InterSystems IRIS - the all-purpose universal platform for real-time AI/ML computations

InterSystems IRIS is a complete, unified platform that simplifies the development, deployment, and maintenance of real-time, data-rich solutions. It provides concurrent transactional and analytic processing capabilities; support for multiple, fully synchronized data models (relational, hierarchical, object, and document); a complete interoperability platform for integrating disparate data silos and applications; and sophisticated structured and unstructured analytics capabilities supporting batch and real-time use cases. The platform also provides an open analytics environment for incorporating best-of-breed analytics into InterSystems IRIS solutions, and it offers flexible deployment capabilities to support any combination of cloud and on-premises deployments.

Applications powered by the InterSystems IRIS platform are currently in use across various industries, helping companies obtain tangible economic benefits both strategically and tactically, fostering informed decision making and closing the “gaps” among event, analysis, and action.

Figure 10 InterSystems IRIS architecture in the real-time AI/ML context

Like the previous diagram, the diagram below combines the new “basis” (CD/CI/CT) with the information flows among the working elements of the platform. The visualization begins with the CD macromechanism and continues through the CI and CT macromechanisms.

Figure 11 Diagram of information flows among AI/ML working elements of InterSystems IRIS platform

The essentials of the CD macromechanism in InterSystems IRIS: the platform users (the AI/ML solution developers) adapt existing AI/ML mechanisms and/or create new ones using a specialized AI/ML code editor: Jupyter (full title: Jupyter Notebook; for brevity, the documents created in this editor are often called by the same name). In Jupyter, a developer can write, debug and test (using visual representations as well) a concrete AI/ML mechanism before its transmission (“deployment”) to InterSystems IRIS. Clearly, a new mechanism developed in this manner receives only basic debugging (in particular, because Jupyter does not handle real-time data flows) – but we are fine with that, since the main objective of developing code in Jupyter is to verify, in principle, that a separate AI/ML mechanism functions. In a similar fashion, an AI/ML mechanism already deployed in the platform (see the other macromechanisms) may require a “rollback” to its “pre-platform” version (reading data from files, accessing data via xDBC instead of local tables or globals – multidimensional data arrays in InterSystems IRIS – etc.) before debugging.

An important distinctive aspect of the CD implementation in InterSystems IRIS: there is a bidirectional integration between the platform and Jupyter that allows Python, R and Julia content to be deployed in the platform (with further in-platform processing) – all three being the languages of the leading open-source mathematical modeling environments. This gives AI/ML content developers the ability to “continuously deploy” their content in the platform while working in their usual Jupyter editor, with the usual function libraries available through Python, R and Julia, and to perform basic debugging (if necessary) outside the platform.

Continuing with the CI macromechanism in InterSystems IRIS. The diagram presents the macroprocess for a “real-time robotizer”: a bundle of data structures, business processes and fragments of code in mathematical environment languages, as well as in ObjectScript (the native development language of InterSystems IRIS), orchestrated by those business processes. The objective of the macroprocess is: to support the data processing queues required for the functioning of the AI/ML mechanisms (based on the data flows transmitted into the platform in real time); to make decisions on the sequencing and “assortment” of AI/ML mechanisms (a.k.a. “mathematical algorithms”, “models”, etc. – they can be called a number of different ways depending on implementation specifics and terminology preferences); and to keep up to date the analytical structures for intelligence around the AI/ML outputs (cubes, tables, multidimensional data arrays, etc. – resulting in reports, dashboards, and so on).

An important distinctive aspect of the CI implementation in InterSystems IRIS: there is a bidirectional integration between the platform and the mathematical modeling environments that allows in-platform content written in Python, R or Julia to be executed in the respective environments and the execution results to be received back. That integration works both in a “terminal mode” (the AI/ML content is formulated as ObjectScript code performing callouts to the mathematical environments) and in a “business process mode” (the AI/ML content is formulated as a business process using the visual composer, or, sometimes, using Jupyter, or an IDE – IRIS Studio, Eclipse, Visual Studio Code). The availability of business processes for editing in Jupyter is specified via a link between IRIS in the CI layer and Jupyter in the CD layer. A more detailed overview of the integration with mathematical modeling environments is provided further in this text. At this point, in our opinion, there is every reason to state that the platform provides all the tooling required for implementing “continuous integration” of AI/ML mechanisms (originating from “continuous deployment”) into real-time AI/ML solutions.

And finally, the crucial macromechanism: CT. Without it, there would be no AI/ML platform (even if “real time” could be implemented via CD/CI). The essence of CT is the ability of the platform to operate the “artifacts” of machine learning and artificial intelligence directly in the sessions of the mathematical modeling environments: models, distribution tables, vectors/matrices, neural network layers, etc. This “interoperability” is, in the majority of cases, manifested through the creation of the mentioned artifacts in the environments (in the case of models, for example, “creation” consists of model specification and the subsequent estimation of its parameters – the so-called “training” of a model), their application (for models: computation of the “modeled” values of the target variables – forecasts, category assignments, event probabilities, etc.), and the improvement of the artifacts already created and applied (for example, through re-definition of the input variables of a model based on its performance, in order to improve forecast accuracy, as one possible option). The key property of the CT role is its “abstraction” from the CD and CI reality: CT is there to implement all the artifacts using the computational and mathematical specifics of an AI/ML solution, within the restrictions existing in the concrete environments. The responsibility for “input data supply” and “output delivery” is borne by CD and CI.

An important distinctive aspect of the CT implementation in InterSystems IRIS: using the above-mentioned integration with mathematical modeling environments, the platform can extract artifacts from the sessions of the mathematical environments it orchestrates and (most importantly) convert them into in-platform data objects. For example, a distribution table just created in a Python session can be transferred (without pausing the Python session) into the platform as, say, a global (a multidimensional data array in InterSystems IRIS) – and further re-used for computations in a different AI/ML mechanism (implemented in the language of a different environment, like R) – or as a virtual table. Another example: in parallel with the “routine” functioning of a model (in a Python session), its input dataset is processed using “auto ML” – an automated search for optimized input variables and model parameters. Together with the “routine” training, the production model receives, in real time, “optimization suggestions”: to base its specification on an adjusted set of input variables and on adjusted model parameter values (no longer as an outcome of training in Python, but as the outcome of training an “alternative” version of it using, for example, the H2O framework). This allows the overall AI/ML solution to handle, in an autonomous way, unforeseen drift in the input data and in the modeled objects/processes.
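As an illustration of what such an artifact transfer could look like from the Python side, here is a hedged sketch that writes a small distribution table into an InterSystems IRIS global using the irisnative Native API for Python. The host, port, namespace, credentials, global name and table contents are placeholders, and this is not the code of the CT macromechanism itself.

```python
# Sketch only: transfer a Python-side artifact (a small distribution table held in a
# pandas DataFrame) into an IRIS global so that other in-platform mechanisms (R, Julia,
# ObjectScript, SQL over the same storage) can re-use it. Assumes the `irisnative`
# package is installed; connection details are placeholders.

import irisnative
import pandas as pd

distribution = pd.DataFrame({"bin": [0, 1, 2, 3],
                             "frequency": [0.10, 0.35, 0.40, 0.15]})

connection = irisnative.createConnection("127.0.0.1", 51773, "USER", "_SYSTEM", "SYS")
iris = irisnative.createIris(connection)

# Write each row into the ^distTable global, subscripted by bin number.
for row in distribution.itertuples(index=False):
    iris.set(row.frequency, "distTable", row.bin)

connection.close()
```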

We will now take a closer look at the in-platform AI/ML functionality of InterSystems IRIS, using an existing prototype as an example.

In the diagram below, the left part of the image shows the fragment of a business process that implements the execution of Python and R scripts. The central part shows the visual logs that follow the execution of those scripts, in Python and in R respectively. Next come examples of the content in both languages, passed for execution to the respective environments. The right part shows visualizations based on the script outputs. The visualizations in the upper right corner are built using IRIS Analytics (the data is transferred from Python to the InterSystems IRIS platform and is put on a dashboard using platform functionality); the ones in the lower right corner are obtained directly in the R session and transferred from there to graphical files. An important remark: the discussed business process fragment is responsible, in this prototype, for model training (equipment condition classification) based on the data received in real time from the equipment imitator process, and it is triggered by the classification accuracy monitor process that monitors the performance of the classification model as it is being applied. Implementing an AI/ML solution as a set of interacting business processes (“agents”) is discussed further in the text.

Figure 12 Interaction with Python, R and Julia in InterSystems IRIS

In-platform processes (a.k.a. “business processes”, “analytical processes”, “pipelines”, etc., depending on the context) can be edited, first of all, using the visual business process composer in the platform, in such a way that both the process diagram and its corresponding AI/ML mechanism (code) are created at the same time. By saying “an AI/ML mechanism is created”, we mean hybridity from the very start (at the process level): content written in the languages of the mathematical modeling environments sits next to content written in SQL (including IntegratedML extensions), in InterSystems ObjectScript, and in other supported languages. Moreover, the in-platform paradigm opens a very wide spectrum of capabilities for “drawing” processes as sets of embedded fragments (as shown in the diagram below), helping to structure sometimes rather complex content efficiently, without “dropping out” of visual composition (into “non-visual” methods/classes/procedures, etc.). In other words, if necessary (and that is likely in most projects), the entire AI/ML solution can be implemented in a visual, self-documenting format. We draw your attention to the central part of the diagram below, which illustrates a “higher-up embedding layer” and shows that, apart from model training as such (implemented using Python and R), there is analysis of the so-called ROC curve of the trained model, allowing its training quality to be assessed visually (and computationally) – this analysis is implemented in Julia (and executes in the respective Julia environment).

Figure 13 Visual AI/ML solution composition environment in InterSystems IRIS
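In the prototype this ROC analysis is implemented in Julia; purely as a language-neutral illustration, the equivalent computation in Python with scikit-learn looks roughly as follows (the variable names are assumptions).

```python
# `y_true` are the 0/1 labels produced by the statistical test, `y_score` the probabilities
# returned by the trained classifier on the same records (both assumed to exist already).

from sklearn.metrics import roc_curve, roc_auc_score

def roc_summary(y_true, y_score):
    fpr, tpr, thresholds = roc_curve(y_true, y_score)   # points of the ROC curve
    auc = roc_auc_score(y_true, y_score)                 # area under the curve
    # An AUC close to 1.0 indicates a well-trained classifier; close to 0.5, no better than chance.
    return fpr, tpr, auc
```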

As mentioned before, the initial development and (in other cases) the adjustment of already implemented in-platform AI/ML mechanisms is performed outside the platform, in the Jupyter editor. In the diagram below we can see an example of editing an existing in-platform process (the same process as in the diagram above) – this is how its model training fragment looks in Jupyter. The Python content is available for editing, debugging and viewing inline graphics in Jupyter. Changes (if required) can be immediately replicated to the in-platform process, including its production version. Similarly, newly developed content can be replicated to the platform (a new in-platform process is created automatically).

Figure 14 Using Jupyter Notebook to edit an in-platform AI/ML mechanism in InterSystems IRIS

Editing of an in-platform process can be performed not only in a visual or a notebook format, but in a “complete” IDE (Integrated Development Environment) as well. The IDEs are IRIS Studio (the native IRIS development studio), Visual Studio Code (with the InterSystems IRIS extension for VSCode) and Eclipse (with the Atelier plugin). In certain cases, a development team may use all three IDEs simultaneously. In the diagram below we see an example of editing the same process in IRIS Studio, in Visual Studio Code and in Eclipse. Absolutely any portion of the content is available for editing: Python/R/Julia/SQL, ObjectScript and the business process elements.

Figure 15 Editing of an InterSystems IRIS business process in various IDEs

The means of composition and execution of business processes in InterSystems IRIS using Business Process Language (BPL) are worth a special mention. BPL allows “pre-configured integration components” (activities) to be used in business processes – which, properly speaking, gives us the right to state that IRIS supports “continuous integration”. Pre-configured business process components (activities and the links among them) are extremely powerful accelerators for AI/ML solution assembly. And not only for assembly: through activities and their links, an “autonomous management layer” is introduced above the disparate AI/ML mechanisms, capable of making real-time decisions depending on the situation.

Figure 16 Pre-configured business process components for continuous integration (CI) in InterSystems IRIS platform

The concept of agent systems (a.k.a. “multiagent systems”) has strong acceptance in robotization, and the InterSystems IRIS platform provides organic support for it through its “production/process” construct. Besides unlimited capabilities for “arming” each process with the functionality required by the overall solution, “agency” as a property of a family of in-platform processes enables the creation of efficient solutions for very unstable modeled phenomena (the behavior of social/biological systems, partially observed manufacturing processes, etc.).

Figure 17 Functioning AI/ML solution in the form of an agent system of business processes in InterSystems IRIS

We proceed with our overview of the InterSystems IRIS platform by presenting applied use domains containing solutions for entire classes of real-time scenarios (a fairly detailed review of some of the in-platform AI/ML best practices based on InterSystems IRIS is provided in one of our previous webinars).

In “hot pursuit” of the above diagram, we provide below a more illustrative diagram of an agent system. In that diagram, the same prototype is shown with its four agent processes plus the interactions among them: GENERATOR simulates data generation by the equipment sensors; BUFFER manages the data processing queues; ANALYZER executes the machine learning proper; MONITOR monitors machine learning quality and signals the need to retrain the model.

Figure 18 Composition of an AI/ML solution in the form of an agent system of business processes in InterSystems IRIS
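To show the division of responsibilities only (and nothing about the actual implementation, which consists of InterSystems IRIS business processes exchanging messages), here is a deliberately simplified, single-process Python sketch of the four agents; the stub model, the random sensor readings and the accuracy threshold are all invented for the illustration.

```python
# Single-process stand-in for the GENERATOR / BUFFER / ANALYZER / MONITOR agents.

import random
from collections import deque

def generator():
    """GENERATOR: simulate one batch of sensor readings from the pump."""
    return [{"vibration": random.gauss(0, 1), "temperature": random.gauss(60, 5)}
            for _ in range(10)]

def analyzer(batch, model):
    """ANALYZER: apply the current model version to a batch of records."""
    return [model(record) for record in batch]

def monitor(predictions, labels, threshold=0.8):
    """MONITOR: estimate accuracy and signal whether a retrain is needed."""
    accuracy = sum(p == l for p, l in zip(predictions, labels)) / len(labels)
    return accuracy < threshold

buffer = deque()                                        # BUFFER: the data processing queue
model = lambda record: int(record["vibration"] > 2.0)   # placeholder "current model version"

buffer.append(generator())
while buffer:
    batch = buffer.popleft()
    predictions = analyzer(batch, model)
    labels = [0] * len(batch)                           # stand-in for the statistical-test labels
    if monitor(predictions, labels):
        pass                                            # here a retrain of `model` would be triggered
```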

The diagram below illustrates the functioning of a different robotized prototype (text sentiment analysis) over a period of time. In the upper part is the evolution of the model training quality metric (quality increasing); in the lower part, the dynamics of the model application quality metric and the retrains (red stripes). As we can see, the solution has shown effective and autonomous self-training while continuing to function at the required level of quality (the quality metric values stay above 80%).

Figure 19 Continuous (self-)training (CT) based on InterSystems IRIS platform

We already mentioned “auto ML” above, and in the diagram below we provide more details about this functionality using one other prototype as an example. In the diagram of a business process fragment, we see an activity that launches modeling in the H2O framework, as well as the outcomes of that modeling (a clear superiority of the obtained model in terms of ROC curves compared to the other “hand-made” models, plus automated detection of the “most influential variables” among the ones available in the original dataset). An important aspect here is the saving of time and expert resources gained thanks to “auto ML”: our in-platform process delivers in half a minute what may take an expert from one week to one month (determining and proving an optimal model).
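For orientation, a run of this kind can be reproduced outside the platform with the open-source H2O AutoML API, which the article references. The sketch below is not the in-platform activity; the file name, the target column and the 30-second budget are illustrative assumptions.

```python
# Hedged sketch of an "auto ML" run with H2O AutoML; dataset and column names are invented.

import h2o
from h2o.automl import H2OAutoML

h2o.init()
frame = h2o.import_file("pump_conditions.csv")     # hypothetical training dataset
target = "defect_label"                            # hypothetical 0/1 target column
frame[target] = frame[target].asfactor()           # treat the target as categorical

aml = H2OAutoML(max_runtime_secs=30, seed=1)       # "half a minute", as mentioned in the text
aml.train(y=target, training_frame=frame)

print(aml.leaderboard)                             # candidate models ranked (AUC by default here)
# The "most influential variables" are available for non-ensemble leaders:
if aml.leader.algo != "stackedensemble":
    print(aml.leader.varimp(use_pandas=True))
```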

Figure 20 “Auto ML” embedded in an AI/ML solution based on InterSystems IRIS platform

The diagram below “brings down the culmination” while being a sound way to end the story about the classes of real-time scenarios: we remind the reader that, despite all the in-platform capabilities of InterSystems IRIS, training models under its orchestration is not compulsory. The platform can receive from an external source a so-called PMML specification of a model that was trained in a tool that is not orchestrated by the platform – and then keep applying that model in real time from the moment its PMML specification is imported. It is important to keep in mind that not every AI/ML artifact can be resolved into a PMML specification, although the majority of the most widely used AI/ML artifacts allow this. Therefore, the InterSystems IRIS platform has an “open circuit” and means zero “platform lock-in” for its users.
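As an illustration of this “open circuit”, applying an externally trained model from its PMML specification can look roughly like the sketch below. The article does not name the scoring engine used inside the platform, so the open-source pypmml package is used here purely for illustration; the file name and the feature values are assumptions.

```python
# Hedged sketch: score incoming records against a PMML model produced outside the platform.

from pypmml import Model

# Load a PMML specification produced by a tool that is not orchestrated by the platform.
model = Model.load("externally_trained_pump_model.pmml")

# Score one incoming SCADA record (illustrative field names and values).
record = {"vibration": 1.7, "temperature": 63.2}
print(model.predict(record))      # e.g. predicted label and/or class probabilities
```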

Figure 21 Model application based on its PMML specification in InterSystems IRIS platform

Let us mention the additional advantages of the InterSystems IRIS platform (for a better illustration, with reference to manufacturing process management) that are of major importance for the real-time automation of artificial intelligence and machine learning:

  • Powerful integration framework for interoperability with any data sources and data consumers (SCADA, equipment, MRO, ERP, etc.)
  • Built-in multi-model database management system for high-performance hybrid transactional and analytical processing (HTAP) of unlimited volume of manufacturing process data
  • Development environment for continuous deployment of AI/ML mechanisms into real-time solutions based on Python, R, Julia
  • Adaptive business processes for continuous integration into real-time solutions and (self-)training of AI/ML mechanisms
  • Built-in business intelligence capabilities for visualization of manufacturing process data and AI/ML solution outputs
  • API Management to deliver AI/ML outputs to SCADA, data marts/warehouses, notification engines, etc.

AI/ML solutions implemented on the InterSystems IRIS platform easily adapt to existing IT infrastructure. InterSystems IRIS secures high reliability of AI/ML solutions through support for high availability and disaster recovery configurations, as well as flexible deployment in virtual environments, on physical servers, in private and public clouds, and in Docker containers.

That said, InterSystems IRIS is indeed the all-purpose universal platform for real-time AI/ML computations. The all-purpose nature of our platform is proven in action through the de-facto absence of restrictions on the complexity of the implemented computations, the ability of InterSystems IRIS to combine (in real time) the execution of scenarios from various industries, and the exceptional adaptability of the in-platform functions and mechanisms to the concrete needs of users.

Figure 22 InterSystems IRIS - the all-purpose universal platform for real-time AI/ML computations

For a more specific dialog with those of our audience who found this text interesting, we recommend proceeding to “live” communication with us. We will readily provide support with the formulation of real-time AI/ML scenarios relevant to your company’s specifics, run collaborative prototyping based on the InterSystems IRIS platform, and design and execute a roadmap for the implementation of artificial intelligence and machine learning in your manufacturing and management processes. The contact e-mail of our AI/ML expert team is MLToolkit@intersystems.com.



from Featured Blog Posts - Data Science Central https://ift.tt/2FMPeMn
via Gabe's Musings

Man taken into custody after NYC train derailment

The incident happened in the West Village on Sunday morning

A subway train in New York City derailed on Sunday after investigators say debris threw it off its track. 

The incident happened at the 14 St./Eighth Ave. station in the West Village at 8:17 a.m. 

Commuters say they saw a man who appeared to be homeless and mentally ill laughing after he threw something onto the tracks. 

Read More: Video shows man knocked out on London subway after racist rant

Police took the 30-year-old suspect into custody for questioning. No charges were immediately filed.

The New York Post reported that good samaritans grabbed the man suspected of throwing the metal construction pieces onto the train tracks and held him until police arrived.

According to the NY Daily News, investigators say the northbound A train ran into metal tie plates, also known as D plates, about 50 feet into the station.

The tie plates are used to secure tracks to the roadbed. The first subway car derailed, with at least one wheel leaving the tracks, and sideswiped seven metal columns.

A source from the NYPD told the NY Daily News that MTA workers have been told in the past not to leave materials near tracks. 

People rush through the New York City subway system at rush hour on August 14, 2013 in New York City. (Photo by Andrew Burton/Getty Images)

Investigators say about 300 feet of the third rail collapsed during the derailment, which knocked out power to all four tracks in the station. 

Another 200 feet of track sustained heavy damages.

The Metropolitan Transportation Authority’s chief safety officer, Pat Warren, said there were 135 people on the A train at the time of the incident. They were evacuated.

Read More: New York City politicians sue governor, mayor to end ban on indoor dining

A Fire Department spokesperson said three passengers suffered injuries, all minor. One person was taken to a hospital for evaluation.

An uptown train with dozens of people onboard got stuck in a nearby tunnel due to the loss of power stemming from the derailment. 





from TheGrio https://ift.tt/35UIeIg
via Gabe's Musings

Did Robert F. Smith use Black America?

OPINION: A criminal tax investigation could reveal that Smith’s $40 million gift to Morehouse may have been a strategy to achieve leniency.

Robert F. Smith, the businessman, philanthropist and the wealthiest Black man in America, gained a great deal of attention and accolades when he pledged to pay off the entire student loan debt of the Morehouse College class of 2019. Tweets went out in praise. Memes of Black folk changing their degrees to “Morehouse ‘19” went up. And a collective sense of pride filled many chests as they saw the gift as the perfect example of “taking care of our own.”

Smith vowed to pay off the college loans incurred by the parents of these 400 young men — up to $50,000 per family — for a total of $34 million. No one needs to tell you that is a lot of money, particularly for Black folks who pursue education only through great sacrifice and financial hardship that others in this country cannot begin to conceive.

When Smith gave his Morehouse commencement address, what many didn’t know is that he was being investigated by the Justice Department and the IRS for possible tax offenses, including allegations that he neglected to pay taxes on $200 million in assets, proceeds from his first private equity fund that moved through offshore structures in the Caribbean. 

Read More: Billionaire Robert Smith investigated by feds for possible criminal charges

Smith is reportedly attempting to reach a civil settlement with the government, and previously tried to gain entrance to an IRS amnesty program to avoid prosecution in 2014 when the IRS first investigated him. He was turned down. Under the program, which provides amnesty to taxpayers who failed to report offshore accounts, the IRS reportedly turns down applicants it already knows did not report those assets, according to Bloomberg. Smith was one of them.

The investigation against Smith is of a criminal nature. The feds have focused on the movement of funds from two offshore accounts into Smith’s charitable foundation in 2014. Investigators have also zeroed in on the winding down of Smith’s first private equity fund that year, which also coincided with his divorce.  

One of the many questions Smith’s philanthropy raises is whether he contributed these millions with the knowledge that news of the tax allegations would come out. Was there any aspect of these initiatives that was an effort to fortify his image in the Black community, and in the community at large? How much did Robert Smith donate to Black America before he had a severe tax problem in 2014? How should we view his generous contribution to Morehouse in light of the tax probe?

While these questions may seem uncomfortable for some of us to ask, they challenge us to think about how the wealthy influence many aspects of our society and the true reasons for their charitable gifts.

As Jelani Cobb suggested, philanthropy is a “penance mechanism” for those who know they’ve done wrong, or serves to hide their foibles by causing people to focus on their charity.            

Given that Smith faces a criminal investigation, a question that remains is whether he should be criminally charged. Offshore tax havens for corporations and the rich are a real problem, with $36 trillion and 10% or more of global GDP in untaxed money stashed away each year. By comparison, the U.S. government takes in $3 trillion in annual revenue. At a time when millions are suffering financially under the coronavirus pandemic and governments face economic turmoil, there must be accountability for those who are hiding vast sums of money that could help people in need.

Ultimately, the truth reigns supreme and through time, it comes to light.

But perhaps the most interesting revelation from this story is that Smith followed the lead of another billionaire who was convicted of tax evasion: Ty Warner of Beanie Babies.

After Ty Warner was convicted of tax evasion and of holding offshore accounts worth $104 million, the judge praised his charitable gifts, and Warner’s legal team was able to get him a plea deal of only five years’ probation and no jail time.

Robert F. Smith has hired one of Warner’s former lawyers and is campaigning for a legal settlement and no criminal charges.

The donations. The same lawyer.  Is this a coincidence? Or is this a strategy that reveals the manipulation of Black America to achieve a legal outcome?

For the Black community to continue to advance, we must be willing to celebrate good deeds and gifts without fear of considering the context of the giving, whether they be from organizations, corporations, or individuals who look like us.

The investigation of Robert F. Smith’s taxes may reveal he’s a Black man with something to hide, or a target in this nation’s taking down of another Black man in America. Our willingness to look at whatever truth may come from it reveals the price we put on integrity.

Follow David A. Love on Twitter at @davidalove.




from TheGrio https://ift.tt/3kAxHG5
via Gabe's Musings

Rep. Jahana Hayes tests positive for COVID-19

The freshman Democrat from Connecticut called for a national testing strategy in a string of tweets

WATERBURY, Conn. — Rep. Jahana Hayes of Connecticut has tested positive for the new coronavirus and will quarantine for 14 days, she announced Sunday on Twitter.

“After going to 2 urgent care centers yesterday, I finally got an appointment at a 3rd site and was tested this morning,” the first-term Democrat said. Hayes said she has no COVID-19 symptoms “except for breathing issues which are being monitored.”

Rep. Jahana Hayes (D-CT) participates in a House Education and Labor Committee Markup on the H.R. 582 Raise The Wage Act, in the Rayburn House Office Building on March 6, 2019 in Washington, DC. (Photo by Mark Wilson/Getty Images)

Hayes sought testing after one of her staff members tested positive for the virus on Saturday. She reported experiencing no symptoms, except for respiratory concerns.

READ MORE: Officials change COVID testing advice, bewildering experts

Hayes, 47, said she contracted the virus despite taking “every possible precaution.” She said her experience underscores the need for a national testing strategy “with a coherent way to receive speedy, accurate results,” adding, “This level of anxiety and uncertainty is untenable.”

Hayes went on to explain in the thread that members of Congress are not regularly tested for the virus, which nearly 6.8 million people have contracted in America, and that mass testing does not exist in Washington, D.C. The virus, which causes the disease COVID-19, has been connected to more than 199,000 deaths in the country and more than 958,300 deaths globally, according to data compiled by researchers at Johns Hopkins University.

READ MORE: People who test positive for coronavirus twice as likely to have eaten at restaurant: study

“Masks, social distancing & frequent floor cleanings are the precautions that are taken in the House. I have taken every possible precaution and still contracted coronavirus,” she wrote in one tweet.





from TheGrio https://ift.tt/2FVDua4
via Gabe's Musings

U.S. to hit 200K dead; Trump sees no need for regret

In the coming days, the number of U.S. deaths is set to clear the outer band of the president’s projections

As the coronavirus pandemic began bearing down on the United States in March, President Donald Trump set out his expectations.

If the U.S. could keep the death toll between 100,000 to 200,000 people, Trump said, it would indicate that his administration had “done a very good job.”

In the coming days, the number of U.S. deaths is set to clear the outer band of the president’s projections: 200,000, according to the official tally, though the real number is certainly higher. The virus continues to spread and there is currently no approved vaccine. Some public health experts fear infections could spike this fall and winter, perhaps even doubling the death count by the end of the year.

Read More: Drug shows promise in 1st largely minority COVID-19 study, company says

Yet the grim milestone and the prospect of more American deaths to come have prompted no rethinking from the president about his handling of the pandemic and no outward expressions of regrets. Instead, Trump has sought to reshape the significance of the death tally, trying to turn the loss of 200,000 Americans into a success story by contending the numbers could have been even higher without the actions of his administration.

“If we didn’t do our job, it would be three and a half, two and a half, maybe 3 million people,” Trump said Friday, leaning on extreme projections of what could have happened if nothing at all were done to fight the pandemic. “We have done a phenomenal job with respect to COVID-19.”

Trump’s reelection prospects will hinge in part on whether enough voters agree with that assessment. The challenge he faces in making his case, with just over six weeks before the Nov. 3 election and voting already underway in some states, is clear.

U.S. President Donald Trump speaks to members of the press prior to his departure from the White House on September 19, 2020 in Washington, DC. (Photo by Sarah Silbiger/Getty Images)

Just 39% of Americans approve of the president’s handling of the pandemic, according to a poll from The Associated Press-NORC Center for Public Affairs Research. Roughly one-quarter of Republicans say they don’t approve of Trump’s stewardship of the public health crisis, though his overall backing among GOP voters sits at a comfortable 84%.

There’s also little doubt that the death toll in the U.S. has soared past where Trump repeatedly assured the public it would be. In February, when the first coronavirus cases were detected in the U.S., the president said the numbers would be “down to close to zero” within days. In early April, when U.S. officials estimated at least 100,000 people would die from the pandemic even if all conceivable steps were taken against it, Trump suggested the numbers would be lower, saying: “I think we’re doing better than that.”

He’s shifted again in recent days, saying that the U.S. remains a success story because some models showed the nation could have 240,000 deaths — a threshold that appears likely to be eclipsed by the end of the year.

Well aware of his sluggish standing with voters on the pandemic, Trump has spent recent weeks trying to refocus his race against Democrat Joe Biden on other issues, including promising white suburban voters that he would keep crime in liberal cities from encroaching on their neighborhoods.

Trump will now campaign in particular on the courts, given Friday’s death of Supreme Court Justice Ruth Bader Ginsburg, seeking to lure back Republican voters who may have turned on him during the pandemic, with the promise of more conservatives on the high court.

Read More: Almost 550 Wynn Las Vegas employees test positive for coronavirus

Though the Supreme Court vacancy does significantly jolt the White House race, Biden still wants to keep much of the focus on the coronavirus. He strengthened his standing through the summer by hammering what he calls the Trump administration’s failures to take the virus threat seriously and to provide consistent guidance to the public, including around the effectiveness of wearing face masks.

After revelations in a new book from journalist Bob Woodward that Trump intentionally played down the seriousness of the virus earlier this year, Biden said of a president’s responsibilities: “You’ve got to level with the American people — shoot from the shoulder,” adding, “There’s not been a time they’ve not been able to step up.”

Trump has insisted he wasn’t downplaying the severity of virus when he compared it with the seasonal flu and undercut public health officials who pushed for more stringent mitigation efforts. Yet he’s repeatedly flouted his own administration’s safety guidelines, rarely wearing a mask himself and holding large campaign events with little evidence of social distancing among his crowds.

With the death toll continuing to climb, Trump has also repeatedly passed up opportunities to serve as a unifying force for communities and families grieving the loss of loved ones. Instead, he’s effectively discounted the deaths of Americans who live in Democratic-leaning states, suggesting he has little responsibility for the well-being of those who don’t support him politically.

“If you take the blue states out, we’re at a level that I don’t think anybody in the world would be at,” Trump said this past week about the death toll. “Some of the states, they were blue states and blue state-managed.”

It was a jarring statement from an American president, yet one in keeping with Trump’s handling of the pandemic and his presidency. He’s long taken a transactional approach to his office, and he spent the opening weeks of the pandemic feuding with Democratic governors in hard-hit states, challenging them to lift restrictions that he deemed harmful to the strong economy he’d hoped to ride to a second term.

“He sees everything, including the implications of this terrible virus, in terms of his own political and personal success — ‘How does it affect me and my electability and my popularity,’” said Margaret Susan Thompson, a professor of history and political science at Syracuse University.

The question looming over his presidency now, as Americans mourn 200,000 lives lost, is what the effects of his handling of the pandemic will be on his political future. The answer will come soon enough from his fellow Americans.





from TheGrio https://ift.tt/3hMWxAI
via Gabe's Musings

Foot Locker stores will turn into voter registration sites

The shoe store is partnering with the nonprofit voter outreach group Rock the Vote to install registration booths at namesake and affiliated stores

Foot Locker is turning all of its U.S. stores into temporary voter registration sites, the company announced on Friday.

In encouraging its core Gen Z consumer base to vote on Election Day, the athletic footwear and apparel giant is partnering with the nonprofit voter outreach group Rock the Vote to install voter registration hubs at its 2,000-plus namesake stores along with its Kids Foot Locker, Lady Foot Locker, Champs Sports and Footaction locations.

Starting September 22, visitors to any U.S. stores operated by Foot Locker will have ‘one-click’ access to a digital hub where they can check their voter registration status, register to vote and sign up for election reminders.

Foot Locker is partnering with the non-partisan Rock the Vote to install voter registration hubs at its 2,000-plus namesake stores along with its Kids Foot Locker, Lady Foot Locker, Champs Sports and Footaction locations ahead of the November 2020 election. (Photo by Alexander Tamargo/Getty Images)

Read More: Michelle Obama and DJ D-Nice host ‘Couch Party’ voter registration

Prior to the coronavirus pandemic, volunteers would typically show up at college campuses, concerts, and festivals, but now those activities have been suspended, and civic groups have been forced to figure out new and creative ways to reach young voters.

“In a year marked with such uncertainty, amid a pandemic and social unrest, our country’s future — and our collective role in shaping it — has never been more important,” Richard Johnson, CEO of Foot Locker, said in a statement. “At Foot Locker, our mission is to inspire and empower youth culture, so partnering with Rock the Vote was a natural fit to help educate and amplify the voices of today’s youth.”


The company said Foot Locker’s Instagram account is followed by 4.3 million people between the ages of 18 and 24, which is the prime demographic Rock the Vote is eager to attract. According to Rock the Vote, more than 4 million young people will become eligible to vote for the first time in the upcoming November election.

Read More: Teen who went viral for Popeyes voter registration idea now has bigger plans

Foot Locker also pledged to make it easier for its employees, many of them young, to vote by giving them flexible work schedules that allow time to cast their ballots.

The Toyota Center, home of the NBA’s Houston Rockets, and Arrowhead Stadium, home of the NFL’s Kansas City Chiefs, are venues not typically used as polling places or for voter registration, yet they too are opening their doors to help increase voter turnout on Election Day.


Maryland congressional candidate Kim Klacik accuses ‘The View’s’ Joy Behar of wearing blackface

Behar defends herself saying the ‘Black community had my back,’ calling it an ‘homage’

Things got tense on the latest episode of ABC’s “The View” when a Black Republican candidate for the U.S. House accused one of its co-hosts of wearing blackface.

Kim Klacik, who is running to represent Maryland’s 7th Congressional District that includes part of Baltimore, got into a heated exchange with the group of women hosts after she lodged the claim against Joy Behar. The late Elijah Cummings represented the district from 1996 to 2019.

Appearing on the daytime talk show Friday, Klacik showed support for President Donald Trump. When the subject of the White House’s reaction to the coronavirus pandemic came up, Behar pressed Klacik to admit that the Trump administration’s response to COVID-19 has not been good, particularly in light of his taped exchange with veteran journalist Bob Woodward in which Trump downplayed the seriousness of the virus to the public.

Kim Klacik (left) and Joy Behar (right) appear on screen in a Friday, Sept. 18 episode of ABC’s “The View.”

“You have to put some blame on your president, I’m sorry,” Behar said to Klacik.

In response, Klacik then said, “Is this Joy speaking? The same Joy that paraded around in blackface not too long ago?”

“That’s not true,” Behar replied. “The Black community had my back. They know that that was not blackface. That was an homage. Oh, please.”

READ MORE: ‘Daquan’ Instagram account owner defends against ‘Blackface’ accusation

Klacik was referring to a 2019 viral moment, when a photo resurfaced of a then-29-year-old Behar dressed for Halloween in what she called a “beautiful African woman” costume, seemingly wearing blackface. Behar discussed the costume during a 2016 episode of “The View,” saying that she wore makeup “that was a little bit darker than my skin.”

After Klacik stated that she, too, has the support of the Black community, “View” co-host Sunny Hostin came to Behar’s defense.

“The Black community did not vote for you. The Black community did not vote for you,” Hostin said. “What planet are you living on?”

Hostin was referring to Klacik’s loss to Democrat Kweisi Mfume during an April special election to serve out what remained of Cummings’ last term after his death last October. Mfume defeated Klacik with nearly 74% of the vote, according to Ballotpedia.

Klacik and Mfume will face off again for the seat in the November general election.

READ MORE: Maryland congressional candidate Kim Klacik slams Biden at RNC

“It was during a special election while we were still under lockdown and I could not talk to people,” Klacik responded to Hostin, as all three women began to speak over one another.

“Listen, Kim, good luck to you,” Behar cut through as the show broke for commercial.

Later that day, Klacik posted a clip of the exchange on Twitter, writing that “The View” hosts cut her off because they did not agree with her.

“Why are they silencing Black Women?” Klacik wrote before making one more dig at Behar. “I think your White Privilege is showing through your blackface!”


Minneapolis street to be named for George Floyd

Chicago Avenue and East 38th Street in Minneapolis has become hallowed ground

Chicago Avenue and East 38th Street, the intersection where George Floyd was killed by police in Minneapolis earlier this year, has become hallowed ground.

Protesters and well-wishers have adorned it with flowers and notes since the father of five died after being detained there by officers, one of whom pressed a knee on his neck for nearly nine minutes.

Protesters gather at a memorial for George Floyd, June 1, 2020, in front of Cup Foods in Minneapolis. Floyd was killed May 25 while in police custody outside the store. (AP Photo/John Minchillo, File)

That street corner will now bear Floyd’s name. The Minneapolis City Council on Friday voted to rename Chicago Avenue between East 37th Street and East 39th Street as “George Perry Floyd Jr. Place,” CNN reports.

Robin Hutcheson, Minneapolis Public Works Department director, said the commemorative street sign will be placed right at the fateful intersection, and it won’t confuse pedestrians.

READ MORE: NOLA’s Jeff Davis Parkway renamed for Black educator Norman C. Francis

“The commemorative name addition will not affect addressing on the street. The signage to indicate the commemorative street naming will be placed at the intersection of 38th St E and Chicago Ave only,” Hutcheson stated.

George Floyd was killed this past spring while in the custody of police. (Credit: Getty Images)

While Floyd’s name is being honored in Minneapolis, his death has also inspired street re-namings in other cities as a show of solidarity with him, Breonna Taylor and other unarmed Black Americans who were killed at the hands of law enforcement.

READ MORE: Court weighs allowing courtroom cameras in George Floyd case

As previously reported by theGrio, Washington D.C. Mayor Muriel Bowser renamed a corner of 16th Street at Lafayette Park, adjacent to the White House, as Black Lives Matter Plaza. A large Black Lives Matter mural was also painted on 16th Street leading to the park, which became an epicenter of protest in the nation’s capital.

Black Lives Matter street murals began popping up all over the country. Forbes reported that the murals have been painted on streets in cities like San Francisco; Austin, Texas; and New York City, where they have appeared in the Manhattan, Queens and Brooklyn boroughs.


Georgia’s Fulton County works to avoid another vote debacle after primary snafu

Voting problems in and around Atlanta in recent elections have become a national flash point

ATLANTA (AP) — Twice delayed because of the coronavirus pandemic, Georgia’s primary election earlier this year was marred by dysfunction: Hourslong wait times at polling places. Absentee ballots that never arrived. Votes cast after midnight.

The problems were most acute in Fulton County, which includes most of Atlanta and is a Democratic stronghold in a traditionally red state. State leaders launched investigations while election officials in the most populous county said they did the best they could in unprecedented circumstances.

Now, election officials say they’re making changes to avoid a repeat in November, as Georgia emerges as a potential presidential battleground, turnout is expected to set records and the coronavirus continues to rage.

With nearly 790,000 active voters, Fulton County accounts for about 11% of the state’s electorate. Voting problems in and around Atlanta in recent elections have become a national flash point because they disproportionately affect Black residents, who comprise just over half the city’s population.

In this June 9, 2020 file photo, people wait to vote in the Georgia’s primary election at Park Tavern in Atlanta. (AP Photo/Brynn Anderson)

The day after the primary, the front page of The Atlanta Journal-Constitution blared, “COMPLETE MELTDOWN” across a photograph of voters, many wearing masks, in a long line outside an Atlanta polling place.

Secretary of State Brad Raffensperger, a Republican, said the election went well overall but promised investigations into the election’s handling in Fulton and neighboring DeKalb County. The Republican speaker of the Georgia House said Fulton was particularly troubling as he called for an investigation of the primary process.

READ MORE: In battlegrounds, absentee ballot rejections could triple

Voting rights activists and academics noted that predominantly Black communities saw some of the longest lines, which they said is especially worrisome given the history of Georgia and other Southern states suppressing Black votes.

“I’m not necessarily accusing folks of intentionally trying to disenfranchise Black voters. But if the outcome is that Blacks are bearing the disproportionate brunt of the decisions, then it is racial and it has to be adjusted,” Emory University political science professor Andra Gillespie said.

LaTosha Brown, co-founder of Black Voters Matter Fund, said she waited with voters who finally cast their ballots at 12:37 a.m. the next day.

“I think it is a combination of the failure of leadership, systemic and structural racism, and voter suppression that is alive and rampant in this state,” she said, adding that long lines and other problems can lead to voter apathy.

The night of the primary, Fulton County elections director Rick Barron spoke frankly with reporters about challenges his staff faced.

He said the pandemic was the root of many of the problems. It caused poll workers to drop out, complicated poll worker training on a new election system and led to a significant number of polling places having to be changed or consolidated.

To limit potential exposure to the virus, the secretary of state encouraged people to vote by mail and sent absentee ballot applications to active voters.

But then the head of Fulton County’s absentee ballot section tested positive for COVID-19 in early April and another staffer died from the disease, causing the office to close for several days just as absentee ballot applications began to pour in. Technical glitches slowed the processing of applications received by email. The county struggled to catch up, but some voters never received requested ballots and ended up voting in person.

In this a June 9, 2020, file photo, voters wait in line to cast their ballots in the state’s primary election at a polling place in Atlanta, Ga. (AP Photo/Ron Harris, File)

A report released last week by a legislative panel found that most of the problems stemmed from the coronavirus, the first-time statewide use of new voting equipment and the increase in absentee voting. Investigations by the secretary of state’s office found that Fulton County failed to process some absentee ballot applications and that poll workers were inadequately trained, among other problems.

Amanda Clark Palmer, an attorney for the county, acknowledged the problems but said the county’s election officials and workers demonstrated “heroic” efforts.

“They do not deserve to be vilified, and yet that is how they feel right now being the only county that has been called to account before this board for the June 9 election,” she told the state election board earlier this month.

To be sure, problems during the primary were not limited to Fulton County. Judges ordered polling sites in 20 of Georgia’s 159 counties to stay open past the 7 p.m. deadline because of late openings or other issues.

READ MORE: Kamala Harris introduces bill for ‘safe’ voting during coronavirus

But Fulton seemed less able to handle problems than other counties, said Chris Harvey, the secretary of state’s elections director.

“We weren’t trying to vilify anybody, but we also aren’t going to spare people’s feelings because the election in November is just too important,” he said.

Harvey is predicting record turnout this fall, with a projected 1.5 million absentee voters, 2 million early in-person voters and 2 million to 3 million in-person voters on Election Day.

Barron, Fulton County’s elections director, has vowed to learn from problems during the primary and improve.

Because of the coronavirus, the county had just eight early voting locations during the primary. It plans to have 30 during the entire three weeks of early voting before the general election, as well as two mobile voting precincts that will move around the county.

Unlike Election Day, when voters must use assigned polling places, Fulton County voters can cast ballots at any early voting location, including State Farm Arena, home of the Atlanta Hawks. It was the first NBA arena to be approved as a voting site, an effort supported by Los Angeles Lakers star LeBron James and his organization More Than a Vote, which aims to boost Black turnout.

About 100 people will staff three call centers to answer questions from voters or poll workers, who sometimes had trouble reaching the county during the primary. Every polling place will have a technician to troubleshoot equipment problems.

The county plans to hire about 2,900 poll workers, including hundreds who will be on standby in case some back out.

An important change is a big increase in places to vote or drop off a ballot. Fulton County will add 91 polling locations, bringing the total from 164 in June to 255 in November. The number of absentee ballot drop boxes will double to about 40, so that roughly 93% of county residents will live within 3 miles of one.

A new online portal to request absentee ballots set up by the secretary of state is making that process more efficient. It’s just one step state election officials are taking to help make the November election run smoothly in Fulton County and elsewhere, Harvey said.

“It’s about making sure that on Nov. 4 the only thing people are talking about are the results,” he said.


Democrats mull tactics to halt Trump Supreme Court nominee

Senate Democrats want to block the chamber from holding a vote to replace the late Justice Ruth Bader Ginsburg

Tension is rising between lawmakers in Washington, D.C., in the wake of Supreme Court Justice Ruth Bader Ginsburg’s death on Friday.

Democratic senators intend to do whatever they can to block the chamber from holding a vote to replace Ginsburg on the bench, should President Donald Trump put forth a nominee ahead of the November election, CNN reports.

Senate Democrats, who are in the minority, on Saturday considered tactics to keep the White House and Senate Majority Leader Mitch McConnell from seating a new justice prior to Nov. 3. Senate Republicans, who hold a 53-seat majority, have vowed to push through a pick as soon as possible, despite McConnell’s 2016 move to block a vote on then-President Barack Obama’s nominee, Judge Merrick Garland, for the Supreme Court seat left open by the death of Justice Antonin Scalia in February of that year, months before that November’s election.

United States Capitol (Photo by Win McNamee/Getty Images)

As a result, several methods to halt a confirmation vote are being considered.

READ MORE: Hillary Clinton reflects on Ginsburg, warns of GOP’s attempt to ‘enact the greatest travesty’

One of those methods would be to drag out the nomination and confirmation process by bringing the chamber to a stop, objecting to routine business for the day. Although McConnell could garner the support of at least 51 senators to vote against such delays, some Republicans have come out against holding a vote on a new appointee before Nov. 3.

Alaska Sen. Lisa Murkowski, a Republican, stated that she does not want a justice chosen before the election. She noted McConnell’s move to block Obama’s nomination of Garland as a reason behind her decision not to support a vote before the election, calling it “a double standard.”

U.S. Supreme Court Justice Ruth Bader Ginsburg participates in a discussion at the Georgetown University Law Center on February 10, 2020 in Washington, DC. (Photo by Sarah Silbiger/Getty Images)

READ MORE: Ruth Bader Ginsburg’s dying wish: ‘I will not be replaced until a new president is installed’

Senate Democrats are also considering pushing legislation to increase the number of seats on the Supreme Court, should Trump make a third appointment to the Court.

The Guardian reports that Trump stated that he will announce a nominee for Ginsburg’s seat next week, and he intends to select another woman to fill it.

The fastest confirmation for a justice was Ginsburg’s own, which took 50 days.

Election Day is 44 days away.
