PROCESSING POWER (Chapter 3.3 of “All AWS Data Analytics Services”)

The Massive, Massive Processing Power of AWS Cloud

3.3  PROCESSING POWER

By the time “big data” became the norm, datasets had grown too large to process with traditional database and software techniques, and they routinely exceed the processing capacity available on-premises. AWS offers computational power that is second to none.

Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Each instance type offers a particular combination of CPU, memory, storage, and networking capacity, with compute measured in “vCPUs” (virtual CPUs) rather than the legacy “ECU” (Elastic Compute Unit) metric, which you will still see from time to time.

Each instance type is optimized with a particular mix of CPU, memory, storage, and networking capacity to suit big data analytics use cases, and each includes one or more instance sizes so you can scale your resources to the requirements of your target analytical workload. (To read more about the differences between Amazon EC2-Classic and Amazon EC2-VPC, see the AWS documentation.)

Massive Amounts of Processing Power – Practically Unlimited – is Available Using AWS

Amazon EC2 Instance Types

Performance depends on the Amazon EC2 instance type you choose. The major instance families are summarized below.

General Purpose instances (e.g., M4, M3) provide a balance of compute, memory, and network resources for many applications (with Intel E5 v3 processors and up to 64 vCPUs). M-family instances are ideal for small and mid-size databases, data processing tasks that require additional memory, caching fleets, and backend servers for SAP, Microsoft SharePoint, cluster computing, and other enterprise applications.

Compute Optimized instances (e.g., C3, C4) feature the highest-performing processors (including custom CPUs optimized for EC2) and are recommended for batch processing, distributed analytics, high performance science and engineering applications, ad serving, MMO gaming, and video encoding.

Accelerated Computing instances (e.g., P2) are built for GPU workloads. They use high-performance NVIDIA K80 GPUs, each with 2,496 parallel processing cores and 12 GB of GPU memory, and they support GPUDirect™ (peer-to-peer GPU communication). A p2.16xlarge instance gives you 16 GPUs, 64 vCPUs, 732 GB of memory, and 192 GB of GPU memory. These instances are optimized for machine learning, high performance databases, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, rendering, and other server-side GPU compute workloads.

Memory Optimized instances (e.g., R4, X1) target memory-intensive applications, with up to 64 vCPUs, up to 2 TB of RAM, and SSD storage. X1 instances are recommended for running in-memory databases like SAP HANA, big data processing engines like Apache Spark or Presto, and high performance computing (HPC) applications; they are certified by SAP to run Business Warehouse on HANA (BW), Data Mart Solutions on HANA, Business Suite on HANA (SoH), and the next-generation Business Suite S/4HANA in production on the AWS cloud. R4 instances are recommended for high performance databases, data mining and analysis, in-memory databases, distributed web-scale in-memory caches, applications performing real-time processing of unstructured big data, Hadoop/Spark clusters, and other enterprise applications.

Storage Optimized instances (e.g., I3) come with very fast SSD-backed instance storage optimized for high-IOPS applications: massively parallel data warehousing, Hadoop, NoSQL databases like Cassandra, MongoDB, and Redis, in-memory databases such as Aerospike, scale-out transactional databases, Elasticsearch, and other analytics workloads. Dense Storage instances (e.g., D2) feature up to 48 TB of HDD-based local storage, deliver high throughput, and offer the lowest price per disk throughput on EC2. This instance type is ideal for Massively Parallel Processing (MPP) data warehousing, MapReduce and Hadoop distributed computing, distributed file systems, network file systems, and log- or data-processing applications.
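
As a quick illustration of choosing an instance type for an analytics workload, here is a minimal boto3 sketch (not an official AWS sample) that launches a single memory-optimized R4 instance. The AMI ID, key pair, and subnet are placeholder assumptions, not values from this post:

```python
# A minimal sketch: launching one memory-optimized instance with boto3.
# Replace the placeholder AMI, key pair, and subnet with your own values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # placeholder AMI ID
    InstanceType="r4.4xlarge",           # memory-optimized type for in-memory analytics
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",               # placeholder key pair name
    SubnetId="subnet-0123456789abcdef0", # placeholder subnet inside your VPC
)

print(response["Instances"][0]["InstanceId"])
```

Swapping the InstanceType string (for example to c4.8xlarge or i3.4xlarge) is all it takes to move the same workload onto a different family.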

AWS Provides Virtually Unlimited Capacity for Massive Datasets with Blazingly Fast Processing Power

Most of these instances support hardware virtualization, AVX and AVX2, Turbo Boost, enhanced networking, and cluster placement groups for low-latency communication between instances, and they run inside a Virtual Private Cloud, giving customers complete control over their network architecture. Instances can also be dedicated to an individual customer to help meet regulatory and compliance requirements (such as HIPAA).

Applications that must respond to high-throughput, real-time streaming data (large-scale distributed apps, IoT platforms, data-intensive analytics applications, and large-scale web and mobile apps) can also run on AWS Lambda, a simple, scalable, low-cost, reliable, low-latency compute service that requires no provisioning or management of underlying compute resources.

Read the last post here.

Read the next post here.

#gottaluvAWS! #gottaluvAWSMarketplace!


SCALING WORKLOADS (Chapter 3.2 of “All AWS Data Analytics Services”)

Cloud Scaling: Up & Out

3.2  SCALING WORKLOADS

Scalability is the capability of a system, network, or process to handle a growing amount of work or application traffic. The goal is to remain available to your customers as demand for your application grows.

AWS provides a scalable architecture that supports growth in users, traffic or data without a drop in performance, both vertically and horizontally, and allows for distributed processing.

AWS makes fast, scalable, gigabyte-to-petabyte analytics affordable to anyone through its broad range of storage, compute, and analytics options.

Manually Scaling with EC2 Instance Types

Amazon EC2 provides a broad range of instance types optimized to fit different use cases. Instance types are composed of varying combinations of CPU, memory, storage, and networking capacity, letting you choose the right mix of resources for your application; examples include the “compute optimized”, “memory optimized”, “accelerated computing” (GPU-optimized), “storage optimized”, and “dense-storage optimized” families. Within each family there are several instance sizes, so you can scale up to a more performant instance or down to a less performant one without migrating to a new instance type. This means you can maintain performance during spikes in demand and scale down to save money when demand drops.

When you resize an instance, you must select an instance type that is compatible with the configuration of the instance. If the type you want is not compatible, you must migrate your application to a new instance of that type.
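
As a hedged illustration (a sketch, not the only way to do it), resizing a single EBS-backed instance with boto3 might look like the following; the instance ID and target size are placeholders:

```python
# A rough sketch of "scaling up" one instance to a larger size in the same
# family. The instance must use a type compatible with its configuration.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"   # placeholder instance ID

# An EBS-backed instance must be stopped before its type can be changed.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Change the instance type, e.g., m4.xlarge -> m4.2xlarge.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m4.2xlarge"},
)

ec2.start_instances(InstanceIds=[instance_id])
```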

Dynamically Scaling

You can also scale dynamically using EC2 Auto Scaling, which helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load on your application. You create collections of EC2 instances called Auto Scaling Groups. You can specify a minimum number of instances for each group, and Auto Scaling ensures the group never shrinks below that size; likewise, you can specify a maximum that the group never exceeds. If you specify a desired capacity, either when you create the group or at any time thereafter, Auto Scaling ensures that the group has that many instances. If you specify scaling policies, Auto Scaling launches or terminates instances automatically as demand on your application increases or decreases.
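
A minimal boto3 sketch of such a group, assuming a placeholder launch configuration name, AMI ID, and subnet IDs:

```python
# A minimal sketch of an Auto Scaling Group with explicit minimum, maximum,
# and desired capacity. Names, AMI, and subnets are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_launch_configuration(
    LaunchConfigurationName="analytics-lc",   # placeholder name
    ImageId="ami-0123456789abcdef0",          # placeholder AMI
    InstanceType="c4.2xlarge",
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="analytics-asg",     # placeholder name
    LaunchConfigurationName="analytics-lc",
    MinSize=2,            # the group never shrinks below this
    MaxSize=20,           # the group never grows above this
    DesiredCapacity=10,   # Auto Scaling maintains this many instances
    VPCZoneIdentifier="subnet-0123456789abcdef0,subnet-0fedcba9876543210",  # placeholder subnets
)
```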

As Auto Scaling adds and removes EC2 instances, you must ensure that the traffic for your application is distributed across all of your EC2 instances. The Elastic Load Balancing service automatically routes incoming web traffic across such a dynamically changing number of EC2 instances. Your load balancer acts as a single point of contact for all incoming traffic to the instances in your Auto Scaling Group. Elastic Load Balancing can detect issues with a particular instance and automatically reroute traffic to other instances until the issues have been resolved and the original instance restored.

Both Auto Scaling and Elastic Load Balancing work hand in hand with the Amazon CloudWatch monitoring system. CloudWatch lets you monitor what you’re running in Amazon’s cloud: collecting and tracking metrics, monitoring log files, setting and displaying alarms, and triggering actions such as Auto Scaling.
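
As a rough, hypothetical sketch of that wiring, the boto3 call below creates a CloudWatch alarm on average CPU across an Auto Scaling Group and points it at a scaling policy. The alarm name, group name, and policy ARN are placeholders; a real policy ARN comes from a put_scaling_policy call like the step scaling sketch later in this post.

```python
# A sketch of the CloudWatch side of the scaling loop: an alarm on average
# CPU for the group that invokes a scaling policy when breached.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="analytics-asg-high-cpu",        # placeholder alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "analytics-asg"}],
    Statistic="Average",
    Period=300,                  # aggregate over 5 minutes
    EvaluationPeriods=2,         # breach for two periods before acting
    Threshold=50.0,              # the breach threshold used in the step scaling example below
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:placeholder"],  # placeholder policy ARN
)
```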

On-Premises Scaling Up

In an on-premises or data center IT environment, “scaling up” meant purchasing more and more hardware and guessing how many servers would be needed at peak. IT departments typically provisioned enough capacity for the highest-demand scenarios, and those servers usually ran 100% of the time. This approach to “scalability” leaves a significant amount of underutilized resources in the data center most of the time, an inefficiency that drives up overall costs in many ways.

Elasticity

Elasticity is defined as the degree to which a system is able to adapt to workload changes by provisioning & de-provisioning resources in an autonomic manner, so that at each point in time the available resources match the current demand as closely as possible.

Scaling Out or In

Scaling out, or horizontal scaling, means adding more instances to handle the load rather than moving to a bigger one. On AWS you also benefit from massive economies of scale: because hundreds of thousands of customers are aggregated in the cloud, you achieve a lower variable cost than you could on your own, which translates into much lower pay-as-you-go prices. One way to scale out and in automatically is with “step scaling policies”.

Auto Scaling applies the aggregation type to the metric data points from all instances and compares the aggregated metric value against the upper and lower bounds defined by the step adjustments to determine which step adjustment to perform. For example, suppose that you have an alarm with a breach threshold of 50 and a scaling adjustment type of “PercentChangeInCapacity“. You also have scale out and scale in policies with the following step adjustments:

Scale out policy

  Lower bound   Upper bound   Adjustment   Metric value
  0             10            0            50 <= value < 60
  10            20            10           60 <= value < 70
  20            null          30           70 <= value < +infinity

Scale in policy

  Lower bound   Upper bound   Adjustment   Metric value
  -10           0             0            40 < value <= 50
  -20           -10           -10          30 < value <= 40
  null          -20           -30          -infinity < value <= 30

Your group has both a current capacity and a desired capacity of 10 instances. The group maintains its current and desired capacity while the aggregated metric value is greater than 40 and less than 60.

If the metric value gets to 60, Auto Scaling increases the desired capacity of the group by 1 instance, to 11 instances, based on the second step adjustment of the scale-out policy (add 10 percent of 10 instances). After the new instance is running and its specified warm-up time has expired, Auto Scaling increases the current capacity of the group to 11 instances. If the metric value rises to 70 even after this increase in capacity, Auto Scaling increases the desired capacity of the group by another 3 instances, to 14 instances, based on the third step adjustment of the scale-out policy (add 30 percent of 11 instances, 3.3 instances, rounded down to 3 instances).

If the metric value gets to 40, Auto Scaling decreases the desired capacity of the group by 1 instance, to 13 instances, based on the second step adjustment of the scale-in policy (remove 10 percent of 14 instances, 1.4 instances, rounded down to 1 instance). If the metric value falls to 30 even after this decrease in capacity, Auto Scaling decreases the desired capacity of the group by another 3 instances, to 10 instances, based on the third step adjustment of the scale-in policy (remove 30 percent of 13 instances, 3.9 instances, rounded down to 3 instances), etc.
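
For concreteness, here is a minimal boto3 sketch (not an official AWS sample) of how the scale-out policy in the table above could be expressed. The group name, policy name, and warm-up value are placeholder assumptions; the step bounds are offsets from the alarm’s breach threshold of 50.

```python
# A sketch of the scale-out step scaling policy from the table above.
# Bounds are offsets from the alarm threshold (50); adjustments are in
# percent because AdjustmentType is PercentChangeInCapacity.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

response = autoscaling.put_scaling_policy(
    AutoScalingGroupName="analytics-asg",      # placeholder group name
    PolicyName="analytics-step-scale-out",     # placeholder policy name
    PolicyType="StepScaling",
    AdjustmentType="PercentChangeInCapacity",
    EstimatedInstanceWarmup=300,  # seconds before a new instance counts toward metrics (see Instance Warmup below)
    StepAdjustments=[
        # 50 <= metric < 60: no change
        {"MetricIntervalLowerBound": 0.0, "MetricIntervalUpperBound": 10.0, "ScalingAdjustment": 0},
        # 60 <= metric < 70: add 10 percent
        {"MetricIntervalLowerBound": 10.0, "MetricIntervalUpperBound": 20.0, "ScalingAdjustment": 10},
        # 70 <= metric: add 30 percent
        {"MetricIntervalLowerBound": 20.0, "ScalingAdjustment": 30},
    ],
)

# The returned PolicyARN is what a CloudWatch alarm invokes.
print(response["PolicyARN"])
```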

Instance Warmup

With step scaling policies, you can specify the number of seconds that it takes for a newly launched instance to warm up. Until its specified warm-up time has expired, an instance is not counted toward the aggregated metrics of the Auto Scaling Group.

While scaling out, Auto Scaling does not consider instances that are warming up as part of the current capacity of the group. Therefore, multiple alarm breaches that fall in the range of the same step adjustment result in a single scaling activity, ensuring that Auto Scaling doesn’t add more instances than you need. Using the example in the previous section, suppose that the metric gets to 60, and then to 62 while the new instance is still warming up. The current capacity is still 10 instances, so Auto Scaling should add 1 instance (10 percent of 10 instances), but the desired capacity of the group is already 11 instances, so it does not increase the desired capacity further. However, if the metric gets to 70 while the new instance is still warming up, Auto Scaling should add 3 instances (30 percent of 10 instances), but the desired capacity is already 11, so it adds only 2 instances, for a new desired capacity of 13 instances.

While scaling in, Auto Scaling considers instances that are terminating as part of the current capacity of the group. Therefore, AWS won’t remove more instances from the Auto Scaling Group than necessary.

Note that a “scale in” activity can’t start while a “scale out” activity is in progress.

On-Premises Horizontal Scaling

On-premises, horizontal scaling typically lowers costs by using commodity hardware and software, but the approach is operationally complex and time consuming: it requires your IT staff to put in long hours, continually making sure enough capacity is provisioned at all times, no matter what.

"Scaling Out" Guessing Done On-Prem Requires Constant Monitoring and Configuration

“Scaling Out” Guessing Done On-Prem Requires Constant Monitoring and Configuration

Scaling with AWS Lambda

Sooner than you think, servers will be obsolete, or at minimum “old school.” AWS Lambda is part of another new buzzword: “serverless computing.”

AWS Lambda runs your code written in Java, Node.js, and Python without requiring you to provision or manage servers. Lambda will run and scale your code with high availability, and you pay only for the compute time you consume in increments of 100 milliseconds. With Lambda, you can run code for virtually any type of application or backend service – all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.
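
To make the programming model concrete, here is a bare-bones, hypothetical Python handler of the kind Lambda runs; the shape of the event depends on whichever trigger (S3 upload, Kinesis batch, DynamoDB stream, direct call) you wire up:

```python
# A bare-bones sketch of a Lambda handler in Python. Lambda invokes the
# handler with the triggering event and runs as many copies as incoming
# events require; there are no instances to provision or manage.
import json

def handler(event, context):
    # 'event' carries the trigger payload; its shape depends on the source.
    print(json.dumps(event))

    # ... do the actual processing here (thumbnail an image, filter a log
    # record, transform a row) ...

    return {"status": "ok", "records_seen": len(event.get("Records", []))}
```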

Below are diagrams of the architecture of three different Lambda patterns.

  1. Real-time File Processing: You can use Amazon S3 to trigger AWS Lambda to process data immediately after an upload. For example, you can use Lambda to thumbnail images, transcode videos, index files, process logs, validate content, and aggregate and filter data in real-time:
Diagram of Lambda in Real-Time File Processing (image courtesy of AWS properties)

2. Real-time Stream Processing: You can use AWS Lambda and Amazon Kinesis to process real-time streaming data for application activity tracking, transaction order processing, click stream analysis, data cleansing, metrics generation, log filtering, indexing, social media analysis, and IoT device data telemetry and metering:

Diagram of Lambda Real-time Stream Processing (image courtesy of AWS properties)

3. Extract, Transform, & Load: You can use AWS Lambda to perform data validation, filtering, sorting, or other transformations for every data change in a DynamoDB table and load the transformed data to another data store:

Diagram of Lambda Real-time Retail Data Warehouse ETL Processing (image courtesy of AWS properties)

Read the previous post here.

Read the next post here.

#gottaluvAWS! #gottaluvAWSMarketplace!


OVERCOMING KEY CHALLENGES IN DATA ANALYTICS (Chapter 3.1 of “All AWS Data Analytics Services”)

Stop Guessing at Capacity!

3.1  CAPACITY GUESSING

In the pre-cloud days, many different types of servers were located on-premises or in a data center. The servers were expensive, every software package running on them required costly licenses that had to be continually renewed or updated, and you needed highly paid staff to provision, configure, and maintain the servers.

As data proliferates from new and previously unforeseen sources, traditional in-house IT solutions cannot keep pace. Heavily investing in data centers and servers based on a “best guess” wastes time and money, and it is a never-ending job.

AWS eliminates the over-purchasing of servers and infrastructure capacity. Before the cloud, capacity decisions made prior to deploying an application often led to over-purchasing “just in case” the app became the next killer app. Often you ended up with expensive idle resources or, even worse, with limited capacity and lost customers.

On AWS, you can access as much or as little capacity as you need, and scale horizontally or vertically as required within minutes, or even automate capacity. This lowers the cost of ownership and reduces management overhead costs, freeing up your business for more strategic and business-focused tasks.

AWS Eliminates Capacity Guessing in Their Massive Secure Data Centers

One benefit of being on AWS is that you trade “Capital Expense” for “Variable Expense”. Rather than having to invest heavily in data centers & servers before you know how you’re going to use them, you only pay when you consume computing resources, and only for what is actually consumed. This translates into a dramatic decrease in IT costs. It’s much smarter to focus on the projects that differentiate your business vs. the heavy lifting of racking, stacking, & powering servers.

The Benefits of Using an AWS Fully-Managed Service

Today’s need for speed and agility in analyzing data requires complex architectures, and on AWS and the AWS Marketplace those architectures are available and ready for use at the click of a button, eliminating the underlying setup and configuration you would have to do on premises.

In addition, AWS offers “AWS Trusted Advisor“, an online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment. It provides real-time guidance to help you provision resources following AWS best practices.

A Diagram of How AWS Trusted Advisor Works

Every AWS customer gets access to four categories in Trusted Advisor:

  • Cost Optimization
  • Performance
  • Security
  • Fault Tolerance

If you have Business or Enterprise support, you then have access to the full set of Trusted Advisor categories.
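
If you do have Business or Enterprise support, Trusted Advisor checks can also be pulled programmatically through the AWS Support API. The sketch below is a minimal, hypothetical boto3 example; note that the Support API is only exposed in the us-east-1 region.

```python
# A sketch of listing Trusted Advisor checks and their latest results via
# the AWS Support API (requires a Business or Enterprise support plan).
import boto3

support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")["checks"]

for check in checks:
    result = support.describe_trusted_advisor_check_result(
        checkId=check["id"], language="en"
    )["result"]
    # Print the latest status alongside the check's category and name.
    print(result["status"], "-", check["category"], "-", check["name"])
```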

Administrators can apply Trusted Advisor suggestions at their own pace and adopt its recommendations as a significant part of an ongoing, day-to-day capacity optimization plan.

You can read the previous post here.

You can read the next post here.

#gottaluvAWS! #gottaluvAWSMarketplace!


2017: THE YEAR OF INTELLIGENCE (Chapter 2.3 of “All AWS Data Analytics Services”)

2017: The Year of Intelligence (image courtesy of vanrijmenam)

2.3  2017: THE YEAR OF INTELLIGENCE

(I apologize in advance: I love this topic so much I got a bit carried away – for those that just want the facts, please skip over my section on The Augmented World Expo 😉 , in italics, contained between horizontal lines)

Now that the buzzword “Big Data” has waned, companies can finally make use of it! 2017 promises to be “The Year of Intelligence.”

2017 will see a rise in Big-Data-as-a-Self-Service solutions.

“Big-Data-as-a-Service”: Catapult Big Data Analytic Adoption
(image courtesy of vanrijmenam)

Self-service big data analytics will enable organizations to monetize their data and to use the insights to improve their business. These solutions do not require months of planning and preparation or the development of an IT infrastructure. Instead, you can simply connect your data sources and get to work. These platforms will enable agility, short time-to-implementation and offer increased productivity for small-to-medium-sized enterprises. Knowing that there are approximately 125 million SMEs in the world, it’s a massive market up for grabs. Big-Data-as-a-Self-Service Solutions, enabling organizations to prepare data irrespective of the type of data, whether structured, semi-structured or unstructured, could, therefore, be the killer app for big data adoption in 2017. Solutions found in AWS Marketplace are plentiful to help your Big-Data-as-a-Service initiatives.

The key applications companies are exploring in 2017 apart from self-service big data analytics are the Internet of Things (IoT), Machine Learning (ML), and Artificial Intelligence (AI), in combination with Augmented Reality (AR) and Virtual Reality (VR).

The tremendous success of the Pokémon GO AR application has proven to businesses that AR (and VR) are realistic revenue generators, suddenly thrust into the mainstream. This opens the gates for workplace gamification to improve employee engagement, retention, and customer experience.


(Shameless plug: my very great friend, a creator of what’s now known as Google Earth, co-founded the first and now largest AR/VR/Emerging Tech Conference & Expo, The Augmented World Expo (AWE). I encourage all readers to go, or at least follow online. It’s a “B2B” conference where all the organizations, companies, and universities that are 7-10 years in the future of technology all gather in one place once a year. The first year I went, my mind was blown & I’ve been an advocate ever since!

Below are some photos and video I could find of AWE – and if I can find the “really cool” photos, I might update this in the future:

The above video is of an augmented reality children’s book. It’s like going into the world of J.K. Rowling! And this was from 3 years ago!

DAQRI’s 4D Human Stewardess and Me Through an iPad!

AR App That Helps Tourism See Full Ruins as If They Were Whole

“Metaio Junaio”, Bought by Apple, AR Web App

DAQRI’s “Skeleton to Outer Space” App with Me Sitting Beside It in First Image

INTEL’s Reaching Inside Monitor to Steal Dragon Egg & Get Attacked by Dragon Mother

How AR Helps Architects with Their Drawings: Visualize the Renderings in 3D with AR

AR Aiding the Setup or Repair of Anything from Sound Systems to Automobile Motors

INTEL’s 3D AR T-Shirt with Dinosaur Popping Out

The Person Who Did All the AR for “IronMan” Gave a Demo

Part of What Went Into Creating the AR for the Ironman Movie

Ok, enough fun on AR 😉 )


“Mixed Reality” (MR), sometimes referred to as “hybrid reality”, is the merging of real & virtual worlds to produce new environments and visualizations where physical & digital objects co-exist and interact in real-time. Existing devices that enable this are Microsoft Hololens & the technologies created by Magic Leap. Mixed Reality offers tremendous opportunities for organizations to perform tasks better and to better understand the data generated by the organization. I’ve seen many of these types of devices at the conference mentioned above. Mixed reality is currently being used in manufacturing to enable better repairs, faster product development, or improved inventory management, for example. Mixed Reality will also help decision-makers understand very complex data sets, leading to better decision-making. Below is an image of how Mixed Reality will dramatically improve data visualizations and decision-making:

Mixed Reality Data Visualization (image courtesy of vanrijmenam)

IoT offers immeasurable insight into a customer’s mind. It’s also changing how daily life operates by helping create more efficient cities and leaner enterprises. With an estimated 50 billion IoT sensors by 2020 and more “things” on the internet by 2030, it’s undeniable that IoT will not only be transformative, but disruptive to business models.

“Conversational AI”: Intelligent Apps will Revolutionize Interactions
(image courtesy of vanrijmenam)

Many Fortune 500 brands are already using “chatbots”, and many more are developing them. Since chatbots are only as valuable as the relationships they build and support, their level of sophistication will make or break them. Investing in AI is one piece of that puzzle, and 2017 will be the year that companies need to expand their AI initiatives while doubling down on investments to improve them with new data streams and channel integrations.

Connected devices will become truly smart in 2017. Robots, autonomous vehicles or boats, drones and any other IoT product will become increasingly intelligent. These devices will become a lot better at understanding the user and adapting the product or service to the needs of the user. Software updates will be done over-the-air, reducing the need to constantly buy a new product.

When these smart devices are connected to intelligent applications such as Siri, Amazon Alexa, Viv, Microsoft Cortana or Google Home, the possibilities become endless. Conversational AI will enable high-level conversations with these intelligent applications. At the moment, these applications are primarily used to control your phone, play music or order a pizza but in 2017 that is about to change drastically.

Already, Amazon Alexa owners can control their car from inside their home and turn on the engine, but soon you will be able to control almost any device with your voice. Viv in particular, billed as the next generation of Siri, aims to do anything you ask. As Microsoft CEO Satya Nadella has said, these bots will be the next apps. 2017 will see the convergence of these intelligent applications with many IoT devices, and with Amazon announcing a new startup accelerator focused on conversational AI, it will change how your organization deals with customers.

The algorithmic business has the potential to change society, and 2016 saw a significant increase in the development of algorithms. Algorithms won the game of Go, can translate languages they were never explicitly taught, and can even flag a suspected criminal simply from an image of a face. AI will not stop there; in the coming years we will move more and more toward a form of artificial general intelligence: a Siri that can also drive your car!

Artificial general intelligence is becoming possible because of deep learning. Deep learning is a subfield of machine learning and is inspired by the neural networks in our brain. The objective is to create artificial neural networks that can find patterns in vast amounts of data. Deep learning is becoming widely available now, because of the increased computing power and large data sets that are available to scientists around the globe. Therefore, in 2017 we will see many new deep learning applications that could significantly impact our lives.

Deep Learning Becomes Smarter, Bringing Us Closer to Artificial General Intelligence (image courtesy of vanrijmenam)

Deep learning algorithms are not explicitly programmed by humans. Rather, they are exposed to massive data sets (millions of videos, images, articles, and so on) and must figure out for themselves how to recognize different objects, sentences, and images. As a result, they can come up with solutions no human could have thought of. One example is a set of algorithms that developed an encryption scheme humans could not decipher, using patterns humans would never use. So if in 2017 you have the feeling that your computer is talking to you in a secret code, that could very well be true :-0

Read the previous post here.

Read the next post here.

The next post begins to focus more on AWS Data Analytic Services. This background should make you aware of why AWS Data Analytics is so important to your business!

#gottaluvAWS! #gottaluvAWSMarketplace!


THE NEW BUZZWORD: “DIGITAL TRANSFORMATION” (Chapter 2.2 of “All AWS Data Analytics Services”)

“Digital Transformation”

2.2  THE “NEW” BUZZWORD: “DIGITAL TRANSFORMATION”

Wikipedia’s definition of “Digital Transformation”: Digital transformation may be thought of as the third stage of embracing digital technologies: digital competence –> digital usage –> digital transformation, with usage and transformative ability informing “digital literacy”. The transformation stage means that digital usage inherently enables new types of innovation and creativity in a particular domain, rather than simply enhancing and supporting traditional methods.

Digital Transformation really means “Digital Fungibility.” What??? While “transformation” is the act of making substantive change, “fungibility” is the intrinsic ability to be substantially changed.

A tactical approach to digital transformation centers on using new tools and related processes to get better results. Those new tools are based on new or reworked ideas, so they’re not a direct substitution for the tools you already have.

What’s key is that they make you think differently, as the Internet, cloud computing, and mobile computing have done.

Thinking differently is perhaps the most important ingredient in digital transformation, in fact. If you keep thinking the same, all that new technology will be used to do more of the same. That’s the opposite of transformation.

As our relationship with technology continues to evolve, machines are able to learn and adapt to their environments. Artificial Intelligence (AI) is able to work collaboratively with human professionals to solve intensely complex problems. AI stands to become one of the most disruptive forces in the IT world.

In the age of Digital Transformation, practically everything can be measured, and every important decision can and should be supported by data and analytics. AI, Machine Learning (ML), deep learning technologies like Apache MXNet (which runs on AWS), and Natural Language Processing (NLP) have become mainstream in the past year. The democratization of data, the self-service movement in AI/ML tools, and the continued simplification of data tooling mean more people will be leveraging data in more applications. Technology is sharpening the workforce and putting the power of data into the hands of business users. When we effectively scale ML, we greatly increase the action-taking bandwidth of an enterprise.

Shifting data science out of its ivory tower and giving everyone in the organization access to advanced, interactive AI will help each employee become smarter and more productive. When data can inform each and every decision a business user makes, businesses will see real competitive advantage and better business outcomes.

Read the last post here.

Read the next post here.

#gottaluvAWS! #gottaluvAWSMarketplace!


THE CURRENT STATE OF DATA & ANALYTICS (Chapter 2.1 of “All AWS Data Analytics Services”)

“The Age of Algorithms”

2.1  THE “OLD” BUZZWORD: “BIG DATA”

The idea behind “big data” dates back to 1944, when it was first described as an “information explosion.” In 2001, an article published by the META Group, “3D Data Management: Controlling Data Volume, Velocity, and Variety”, first described what have become the generally accepted defining characteristics of big data (the “3 V’s”). For you history buffs, Forbes published an interesting story entitled “A Very Short History of Big Data.”

The sheer volume of data generated by applications and infrastructure is increasing beyond comprehension. For the first time, however, teams will be embracing an algorithmic approach, known as “Algorithmic IT Operations” (AIOps), to see what’s happening in the network in real time, diagnose issues, and automate a fix.

For decades, companies have been making business decisions based on traditional relational enterprise data, such as transactions. Then “big data” came into the picture, bringing massive volumes of both structured and unstructured data that are too large to process using traditional database and software techniques. In fact, there is more unstructured data in the world today than structured. The volume is too big, it comes from many different sources in many different formats, it moves too fast, and it normally exceeds the processing capabilities available on-premises. But this data, when captured, formatted, manipulated, and stored, yields powerful insights through analytics, some never before imagined.

The focus has now shifted from “advanced analytics” to “advancing analytics”, which will be brought into self-service tools. With more users advancing their analytics, AI will play a bigger role in organizations.

In 2017, “big data” will be subsumed into the topic of Artificial Intelligence (AI). Big data is an enabler of AI and not an end in itself.

The shift is toward a higher value on critical thinking in the workplace, as people realize the enterprise does not suffer from a deficit of data but from a deficit of insight. The questions for big data are “What can I learn from it?” and “Where can I draw meaningful insights?” AI and machine learning (ML) will be the big players, and companies will need to ask questions that their data can answer through these two transformative technologies.

You can read the previous post here.

You can read the next post here.

#gottaluvAWS! #gottaluvAWSMarketplace!


DATA MIXOLOGY AND THE END OF DATA SILOS (Chapter 1.4 of “All AWS Data Analytic Services”)

Data Mixology & the End of Data Silos

1.4  DATA MIXOLOGY AND THE END OF DATA SILOS

The role of the CIO has changed dramatically over the past decade. With the rise of new roles like the Chief Digital Officer and the Chief Customer Officer, digital transformation is gaining urgency NOW, and not just in a company’s technology but across the entire organization. Modern solutions are multidimensional, and technology CANNOT be used as a crutch. A focus on breaking down silos gives innovation more room to flourish and makes collaboration easier.

AWS removes limits to the types of database and storage technologies you can use by providing managed database services that offer enterprise performance at open source cost. This results in applications running on many different data technologies, using the right technology for each workload.

Sample 1 of the Different Data Involved in One Solution

Sample 2 of the Different Data Involved in One Solution

Sample 3 of the Different Data Involved in One Solution

Sample 4 of the Different Data Involved in One Solution

Sample 5 of the Different Data Involved in One Solution

These new trends and technologies will be at the core of digital transformation efforts in 2017, and many will continue far beyond next year. Digital transformation is no longer optional: building an organization that can rapidly change both its technology and its culture is essential not only to surviving in a time of business disruption, but to building a business model that is agile, adaptable, and designed to thrive long into a future where change is the only constant.

Read the previous post here.

Read the next post here.

#gottaluvAWS! #gottaluvAWSMarketplace!
