Federated Learning Is a Deep Learning Technology with High-Stakes Mission Potential

Federated Learning (FL) is one more instrument of corporate monopoly, adapted to the public movement toward data decentralization.


Originally published by Technology Hits on Medium


Photo by Clark Cruz from Pexels

Without question, data is the defining commodity of our time, by far the most precious article of trade. Data resources, and the means of accessing them, are growing exponentially.

It is estimated that the vast majority of all the world's data has been generated within the past two years alone, at a rate of roughly 2.5 quintillion bytes per day, and its value is rapidly soaring.

The inception of the data age goes beyond another information technology revolution: data has become the driver of the economy. What makes the data-driven economy different is that trade is faster, more efficient, and more widespread. Furthermore, the nature of business itself is shifting. Between 2008 and 2012, the annual growth rate of cross-border data flows outpaced that of trade in conventional goods and services. Hence, in the 21st century, data is an increasingly traded commodity.

The emergence of this lucrative market has made it one of the fastest-growing in the world. Until not long ago, the resource in question was oil wealth, or perhaps gold during the gold-rush epoch. Today, data is the oil and gold of the digital age. Giant corporations such as Google, Amazon, Apple, Facebook, and Microsoft have become unstoppable heirs to data wealth: their profits are surging, collectively racking up more than $25bn in net quarterly profit. These companies have pivoted their tactical missions into a strategic voyage to maximize profit and power by capturing the value of information. It is with that intention that they adopt modern technologies such as Artificial Intelligence (AI) and Deep Learning.

Artificial Intelligence and Machine Learning as Instruments of Data Mining and Information Commoditization

Data mining is a tool that utilizes Artificial Intelligence (AI) to excavate helpful information from a database, information that can then be used to improve operations. It also opens up avenues to automatically examine, visualize, and uncover patterns in data. Both data mining and machine learning fall under the umbrella of data science. That makes sense, since both technologies use data; it is hardly shocking that machine learning is also used to conduct worthwhile data mining.

Data collected through data mining can teach machines, so the two concepts go hand in hand, complementing each other. Furthermore, machine learning / artificial intelligence and data mining methods employ the same fundamental algorithms to uncover structure in data, although their intended outcomes may ultimately differ.

The Challenges of the Data Mining Process, and Prevailing Pipeline Solutions to Overcome Such Obstacles

Unique AI techniques and algorithms in machine learning, data mining, and data science are currently expected to overcome some of the barriers to accessing and processing data, including private information in cyberspace. The techniques in question include data cleaning, clustering, classification, feature selection and extraction, neural networks and deep learning, representation learning, knowledge discovery, anomaly detection, fault detection, transfer learning, active learning, and other solutions advancing the state of the art against the challenges of every industry.

The data analytics realm demands data scientists' skills. That is why new analytical tools are being introduced into business analytics. The traditional enterprise data miner customarily requires training in the technical aspects of business-centric data analytics. Today, more innovative ways are being devised to guide newcomers to data science and motivate them to navigate data analytics easily, developing skills around the emerging areas of data commoditization without delving into extraneous technical details.

Means by Which the Big Industries Aim to Commoditize Personal Information

Publicly available data services are one of the largest avenues for monetizing data. Until not too long ago, the internet was a novelty, and the average person could not develop a web page or set up an online account. Today, even those with no graphic design skill can build a site from a free template and choose and engage a hosting provider. Future data services will follow a similar progression, opening up data processes that are currently available only to experts.

Yet they will still be in need of public information!

In the coming decades, data will become such a commodity that exploiting it will be a bare-minimum requirement for businesses to stay competitive. For the same reason, data departments and the Chief Data Officer are becoming crucial in major industries. Data is becoming the focal point of entire departments within organizations, operating autonomously from the technology and marketing subdivisions.

The landscape of data categories is also evolving as data advances, with significant subdivisions into different sub-types. Accessible predictive analytics, for instance, long existed as more of a wish-list item than a viable or reliable practice. It now exists and is commonly employed by many big industries, though it still relies on a combination of state-of-the-art artificial intelligence and human data analysts to fine-tune its output.

Although it is described as available to the public, predictive analytics isn't genuinely open to the general public or to small business owners with limited budgets and resources. Predictive analytics uses mathematical modeling schemes to generate forecasts about various unknown circumstances, characteristics, or events.

Combining predictive analytics with real-time data analytics (RTDA) will further empower large industries to commoditize the data harvested via data mining.

Real-time data analytics analyzes collected data immediately after it becomes accessible. Through RTDA, companies gain insight within a brief period, allowing them to react without pause, seizing timely opportunities or preventing problems before they occur. By comparison, batch-style analytics may take hours or even days to yield results.

Federated Learning as Deep Learning with an Alternate Mission

Deep learning and deep neural networks are, indeed, brilliant technological rivals to humans, and they have caused significant backlash from communities concerned with ethics and individual privacy. That is why there has been growing public interest worldwide in decentralizing data storage, making it harder for big corporations to mine and commoditize personal information without rewarding the original data proprietor. But as one would expect, the public's trend does not come without a response from the hoarders of siloed data in centralized databases!

To overcome future obstacles, companies like Google, Apple, Facebook, Amazon, and Microsoft are devising new solutions that will keep the doors open to continued data mining. That includes implementing "Federated Learning" systems and particular hardware, like Amazon's Inferentia AI chip.

Federated Learning, a distributed machine learning approach, enables model training on a large corpus of decentralized data. It is collaborative machine learning without a centralized database, whereas standard machine learning approaches require centralizing the training data. Instead, Federated Learning (FL) models employ users' interactions with their mobile devices to teach ML solutions.

Within Federated Learning, mobile phones collaboratively learn a shared prediction model while keeping all the training data on the device, dissociating the ability to enhance machine learning from storing the data in the cloud.

In Federated Learning, a device downloads the current model, improves it by learning from data on the individual smartphone (a node, as it would be described in blockchain terms), and then summarizes the changes as a small, focused update. The update to the model is sent to the cloud over encrypted communication, where it is immediately averaged with the updates of the other participants to improve the shared model. All the training data remains on the mobile device, and no individual update is stored in the cloud.
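The round described above can be pictured as a toy version of federated averaging. Everything here is an illustrative assumption, not Google's implementation: the linear model, the single-gradient-step local update, and the `federated_averaging` helper are invented for the sketch, and encryption of the updates in transit is omitted.

```python
import numpy as np

def local_update(global_model, local_data, lr=0.05):
    """On-device step: one gradient step of squared loss on a linear
    model, returning only the small, focused update (not the data)."""
    X, y = local_data
    grad = 2 * X.T @ (X @ global_model - y) / len(y)
    return -lr * grad

def federated_averaging(global_model, device_datasets):
    """Server step: average the participants' updates and apply them
    to the shared model. No raw data ever leaves a device."""
    updates = [local_update(global_model, d) for d in device_datasets]
    return global_model + np.mean(updates, axis=0)

# Toy demo: three devices, each holding private (X, y) data generated
# from the same underlying relationship.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    devices.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):
    w = federated_averaging(w, devices)
# w converges to the true weights [2., -1.]
```

The design point worth noting is that the server only ever sees `updates`, never the `(X, y)` pairs themselves.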

In short, Federated Learning embodies the principles of focused data collection and data minimization. It can mitigate many of the systemic privacy risks and costs of traditional, centralized machine learning and data science approaches.

Tyranny via Federated Learning

Although federated learning seems to be a promising tool with enormous potential to function in a decentralized world, it also opens the gate for an adversary to "poison" the algorithmic model.

Our lives, amid expanding cybertechnology riddled with data science, are filled with a mass of algorithms, most working to accommodate our interests but a few designed to take advantage of our autonomy. Indeed, the vindictive players of our biased realm can corrupt the model through their own device, or take over the devices of other parties involved in training the algorithmic model. It is even possible for adversaries to control the data of every user they have compromised: alter how it is trained locally, change hyperparameters such as the learning rate, and re-weight the individual model update before submitting it to the server for aggregation. They can also alter a participant's approach to local training from one round to the next. With such potential, hackers can open backdoors that bend algorithms toward their own intentions.
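The re-weighting attack described above can be sketched in a few lines. The scenario and numbers are hypothetical: a compromised client ignores its real data and submits a "boosted" update, scaled so that it survives averaging with the honest participants.

```python
import numpy as np

def poisoned_update(global_model, target_model, boost=10.0):
    """A compromised device submits an update steering the shared
    model toward the attacker's target, scaled up ('boosted') so it
    dominates the average of honest updates."""
    return boost * (target_model - global_model)

# Hypothetical round: nine honest devices each nudge the model by
# [0.1, 0.1]; one attacker targets [-5, 5], boosting by the number
# of participants.
global_model = np.zeros(2)
honest = [np.array([0.1, 0.1]) for _ in range(9)]
attack = poisoned_update(global_model, np.array([-5.0, 5.0]), boost=10.0)
new_model = global_model + np.mean(honest + [attack], axis=0)
print(new_model)  # dragged almost entirely to the attacker's target
```

One boosted participant out of ten moves the model to roughly [-4.91, 5.09], which is why a single compromised device is dangerous.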

One of the biggest obstacles is that federated learning and its Secure Aggregation protocol claim to keep user data private.

When "implemented properly," Federated Learning makes it impossible for the server to spot anomalies in an individual user's summaries. However, that is not an unmixed blessing, given the current state of the corporate monopoly. The Secure Aggregation protocol employed by FL only grants access to user summaries once they have been added together with those of other users. Since summaries can't be screened individually, it is impossible to see the inconsistencies that may lie within them, giving toxic model attacks the perfect way to creep in.
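The mechanism behind that trade-off can be illustrated with pairwise masking, the core idea of Secure Aggregation. This is a toy sketch under simplifying assumptions: real Secure Aggregation also handles dropped-out users and cryptographic key agreement, both omitted here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Three users' private update vectors (the "summaries").
updates = [rng.normal(size=4) for _ in range(3)]
n = len(updates)

# Each pair of users (i, j) with i < j agrees on a random mask;
# user i adds it and user j subtracts it, so all masks cancel in the sum.
masks = {(i, j): rng.normal(size=4) for i in range(n) for j in range(i + 1, n)}

masked = []
for i in range(n):
    m = updates[i].copy()
    for j in range(n):
        if i < j:
            m += masks[(i, j)]
        elif j < i:
            m -= masks[(j, i)]
    masked.append(m)

# The server sees only the masked vectors: each looks like noise,
# yet their sum equals the sum of the true updates.
server_sum = np.sum(masked, axis=0)
```

The server learns the aggregate exactly, but has no way to inspect any single `masked[i]`, which is precisely the blind spot the article describes.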

Federated Learning Put in Perspective

Federated Learning is intended to solve the most limiting obstacle of the data mining scheme: data availability. It lets data mining reach past one of its biggest obstructions, the decentralized database.

One further consideration in federated learning, relative to traditional machine learning, is the substantial requirement for more local device power and memory to train the model. Another technical issue involves bandwidth. Federated learning is conducted over Wi-Fi, 4G, or 5G, while traditional machine learning occurs in data centers, and the bandwidth of Wi-Fi or 5G is orders of magnitude lower than that between the worker nodes and servers in those centers. So if algorithm training runs while a device is in use, it degrades the device's performance. Google has addressed that problem by training only on devices that are idle, switched on, and plugged into an outlet. An additional challenge is that devices drop out during the training process.

The privacy stakes, meanwhile, remain high. No one wants an awkward medical diagnosis seeping into the public, and the wealth of data in centralized repositories is tremendously helpful for scams like identity theft and insurance fraud.
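Google's client-side gating described above can be pictured as a simple eligibility predicate. The `device` dictionary and its keys are hypothetical, invented for this sketch, and "plugged in plus idle plus unmetered network" is the condition the article paraphrases.

```python
def eligible_for_training(device: dict) -> bool:
    """Sketch of client-side gating: participate in a training round
    only when the phone is idle, charging, and on an unmetered
    (Wi-Fi) connection, so user experience is not degraded."""
    return (
        device["idle"]
        and device["charging"]
        and device["network"] == "wifi"
    )

ready = eligible_for_training(
    {"idle": True, "charging": True, "network": "wifi"}
)
busy = eligible_for_training(
    {"idle": True, "charging": False, "network": "5g"}
)
```

A device that fails the check simply skips the round, which is also why participant dropout is an inherent part of the protocol.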

Keeping the Federated Learning Algorithm Private, or an Agent of Monopoly

Erudite algorithmic models are priceless commodities, which makes them a bull's-eye for attacks.

Companies control the model through the secret sharing of multi-party computation, which allows an organization to "conceal" the model by distributing portions of it at a time across devices. That way, none of the secret-holding parties can know the entire model.
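The concealment idea above is essentially additive secret sharing, the basic multi-party-computation primitive. A minimal sketch, assuming a model represented as a plain vector; production schemes work over finite fields rather than floats:

```python
import numpy as np

def share_model(model, n_parties, rng):
    """Split a model vector into n additive shares: each share alone
    is indistinguishable from noise, but all shares sum back to the
    original model."""
    shares = [rng.normal(size=model.shape) for _ in range(n_parties - 1)]
    shares.append(model - np.sum(shares, axis=0))
    return shares

rng = np.random.default_rng(7)
model = np.array([0.5, -1.2, 3.3])
shares = share_model(model, n_parties=4, rng=rng)

# No single party's share reveals anything about the model,
# yet the full set reconstructs it exactly.
reconstructed = np.sum(shares, axis=0)
```

This is what gives the model owner leverage: holding back even one share keeps every other party from ever seeing the whole model.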

With its prevailingly obscure arrangement, Federated Learning opens an innovative door to monopoly and legal extortion under the banner of preserving intellectual property and security.

Federated Learning is state-of-the-art technology. Like other technologies before it, it has yet to prove the legitimacy of its algorithms in the hands of industry leaders; it is another tool in the hands of corporations. Until their algorithms are transparent, we can never be confident that federated learning pursues any strategy different from that of its predecessor technologies, machine learning, deep learning, and artificial intelligence: namely, monetizing individuals' private information and using the amassed data to increase herding power over the mainstream.

#federatedlearning #technology #data #bigdata #datamining #RTDA

