How to Avoid the Biggest Security Risks of Cloud-Based Services

Considerations for migrating to the cloud

Cloud-based services bring several economic and technical advantages to a business. They eliminate the cost of hardware that becomes increasingly obsolete each year after it’s purchased. They make applications and data views accessible from any geographical location through mobile apps and web-based applications. They also solve the scalability problems that crop up as a business grows and contracts over time.

A website or customer-facing app, for example, can quickly respond to changing traffic volumes from hour to hour using a cloud-hosting service’s automated provisioning of its servers. The efficiency gained from these systems can be substantial.

Businesses can eliminate the cost of buying and maintaining server capacity that’s only used during peak hours. One problem that switching to cloud technology doesn’t solve, however, is the security risk that comes from its very nature: being online. Those risks may not go away completely, but they can be reduced to a low, manageable level.

In this article, we’ll consider five of the biggest security risks of cloud-based services, and what you can do to avoid them.

1. Regulatory Problems

In the domain of internet security risks, regulatory sanctions have become part of the constellation of issues that can impact a business’s bottom line. Regulation is not a security risk in itself; rather, it is the set of industry and legal frameworks that require you to handle security risks appropriately. It’s both an internal and external issue.

You and your partners need to comply with regulations like Europe’s GDPR and the American medical industry’s HIPAA. When it comes to picking cloud-based services for your business, you’ll need to choose those that comply with the regulations that apply to your industry. That might include conducting audits or commissioning independent studies.

You’ll also need to conduct risk assessments when you’re deciding to move your operations or data to the cloud. You may need to consider a private cloud server for regulatory reasons.

2. Loss of Data

The loss of data generally means the destruction of data. In a ransomware attack, hackers hold key assets hostage and demand payment to release them. If you don’t comply, they may delete your business’s critical data as punishment.

There are also attackers that simply want to cause damage by destroying your data assets or hamper your operations by rendering servers inoperable. Data loss can also result from hardware failures or disasters that aren’t man-made.

The primary way to mitigate these risks is to identify critical data and create backup copies that serve as a fallback if data is lost to unforeseen events. The cloud is often where we keep backups because it’s an off-premises location, but if the data is stored only in the cloud, that creates a vulnerability, too. One solution is to choose a cloud service that includes backup and restoration of data with its service.
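
As a rough illustration of that belt-and-braces approach, here is a minimal Python sketch (assuming the boto3 AWS SDK and hypothetical bucket names) of two habits: enabling versioning so deleted or overwritten objects can be recovered, and copying critical objects to a second, separate bucket.

```python
# Minimal sketch of a cloud backup habit, assuming boto3 and
# hypothetical bucket names; adapt to your provider and estate.
import boto3

s3 = boto3.client("s3")

# 1. Enable versioning so overwritten or deleted objects are recoverable.
s3.put_bucket_versioning(
    Bucket="prod-critical-data",  # hypothetical bucket
    VersioningConfiguration={"Status": "Enabled"},
)

# 2. Copy critical objects to a separate backup bucket (ideally in
#    another region or account, so one failure can't destroy both).
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="prod-critical-data"):
    for obj in page.get("Contents", []):
        s3.copy_object(
            Bucket="prod-critical-backup",  # hypothetical backup bucket
            Key=obj["Key"],
            CopySource={"Bucket": "prod-critical-data", "Key": obj["Key"]},
        )
```

The same pattern applies on any provider; the point is that the backup lives somewhere a single failure, or a single compromised credential, cannot reach.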

3. Data Breaches

The worst-case scenario for most businesses is a network breach in which data is stolen and sold on black markets or released to the public. Data breaches are usually the work of outside actors who find a way to gain unauthorized access to a corporate network, but they can be the result of unintentional lapses in security by employees, too. It’s important to include cloud services in your business’s overall security plan and analyze the vulnerabilities that they have.

Different types of cloud deployments hide or expose your network to possible hacking attempts to differing degrees. Public cloud services are accessible from the internet, while private services exist inside your network. These security risks should be weighed when deciding which type of cloud service is best for your needs.

4. Insider Fraud

Another way that sensitive data and communications can be stolen is insider fraud. In this case, one of your employees abuses their access to your information. Sometimes they may release inside information as revenge, or they may be bribed by outsiders to steal customer data.

In the case of cloud-based services, the insider might work for the service provider rather than your business, or they may be part of your development team. The best way to control insider fraud cases is to put monitoring and strong access controls into place.
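
As a toy illustration of that pairing, the sketch below (a hypothetical, framework-agnostic example, not any vendor’s API) denies access unless the caller’s role is on an allow-list, and writes every attempt, granted or denied, to an audit log that someone actually reviews:

```python
# Hypothetical sketch of "strong access controls plus monitoring":
# deny by default, and log every access attempt for later review.
import logging

logging.basicConfig(filename="access_audit.log", level=logging.INFO)

# Least privilege: each dataset lists the few roles allowed to read it.
ACCESS_POLICY = {
    "customer_pii": {"dpo", "support_lead"},
    "billing_records": {"finance"},
}

def read_dataset(user: str, role: str, dataset: str) -> bool:
    allowed = role in ACCESS_POLICY.get(dataset, set())
    # Audit both grants and denials; unusual patterns are the signal.
    logging.info("user=%s role=%s dataset=%s allowed=%s",
                 user, role, dataset, allowed)
    if not allowed:
        raise PermissionError(f"{user} ({role}) may not read {dataset}")
    return True  # in real code, return the data here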

When you outsource infrastructure to a cloud service, you’ll also need to research the controls that they have in place to protect you from their own employees who might be tempted to sell your data. Encrypting data at rest and controlling who can access it are a couple more ways to discourage insider threats. – Read more

Getting to Cloud, What are the Key Factors?

Lessons learned from migrating a major bank’s entire application portfolio to the cloud.

With the ongoing stampede to public cloud platforms, it is worth taking a closer look at some of the factors behind such rapid growth. Amazon, Azure, Google, IBM, and a host of other public cloud services saw continued strong growth in 2018, up 21% to $175B, extending a long run of robust revenue growth for the industry.

It is worth noting, though, that more traditional SaaS and private cloud implementations are also expected to grow at near-30% rates for the next decade, essentially matching or even exceeding public cloud infrastructure growth over the same period. The industry with the highest adoption of both private and public cloud is financial services, where adoption (usage) rates above 50% are common and even rates close to 100% are occurring, versus a median rate of 19% across all industries.

In my recent experience leading IT and Operations for Danske Bank (a large Nordic bank), we completed a four-year infrastructure transformation program that migrated the entire application portfolio from proprietary dedicated server farms in five obsolete data centers to a modern private cloud environment in two data centers. Of course, the mainframe complex was migrated and updated as well, and we incorporated some SaaS and public cloud usage. The migration effort eliminated nearly all of the infrastructure-layer technical debt, reduced production incidents by more than 95%, and correspondingly improved resiliency, security, access management, and performance.

These are truly remarkable results that now enable Danske Bank to deliver superior service to our customers, with reduced infrastructure cost and improved time to market. But how do you get to private or multi-cloud successfully?

Quality over cost

First, it is critical to view cloud as a quality solution rather than a cost solution. Any major infrastructure re-platforming should be judged primarily on improved capabilities, increased quality and reduced risk. Re-platforming done primarily on a cost rationale rarely delivers, especially considering that most corporations can achieve far better cost savings by taking the same investment and using it elsewhere (operations, digitalization, consolidation, etc). So start the project with the right investment rationale: to improve time to market, reduce security vulnerabilities, eliminate technical debt, improve availability, and so on. The project objectives are then more relevant and important to the corporation, and they demand a higher level of design and execution quality. These imperatives will actually result in a more focused effort that yields better results.

Second, the effort must be comprehensive. If you only do a portion of your server estate, and allow myriad legacy systems to remain, then you have not reduced your complexity (in fact, you may have actually increased it). This complexity, coupled with dated systems, is a major contributor to defects and issues that reduce security, availability and performance. Your architecture should incorporate a proper migration of all systems to the appropriate “flavor” of cloud platform.

Standardization matters

Today’s legacy environments are often singular, meaning nearly every server is a custom implementation with slightly, or even widely, varying configurations and platforms, each requiring custom maintenance and expert care to function and stay current. By architecting a comprehensive set of templates, perhaps 20 or even fewer, the complexity of administration and maintenance is reduced dramatically.

My experience is that most legacy applications can be easily ported to a suitable server template in your private cloud. The remainder can be tougher, but it is important to work through them to deliver them on the new platform. This definition of the templates, and then the initial deliveries with proper middleware and database stacks, is often the toughest part of the engineering. It should be jointly done by your infrastructure and application engineering teams and piloted at the start of your migrations. But once defined and packaged, you have now reduced the complexity of the new environment enormously.
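
A hedged sketch of what that might look like in code, with invented template names and fields: a small fixed catalog of server templates, and a matcher that maps each application’s requirements onto one of them, flagging the rest as exceptions to work through.

```python
# Illustrative only: a tiny catalog of standardized server templates
# (invented names and fields) and a matcher that flags exceptions.
TEMPLATES = {
    "web-small": {"cpus": 2, "ram_gb": 8,  "stack": "nginx"},
    "app-java":  {"cpus": 4, "ram_gb": 16, "stack": "jvm"},
    "db-oracle": {"cpus": 8, "ram_gb": 64, "stack": "oracle"},
}

def assign_template(app_name, needs):
    """Return the first template that satisfies an app's needs, or
    None, which marks the app as an exception to work through."""
    for name, spec in TEMPLATES.items():
        if (spec["cpus"] >= needs["cpus"]
                and spec["ram_gb"] >= needs["ram_gb"]
                and spec["stack"] == needs["stack"]):
            return name
    return None

print(assign_template("crm", {"cpus": 2, "ram_gb": 4, "stack": "nginx"}))
# -> "web-small"; apps returning None go on the sunset/rework list.
```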

A third critical principle we followed was to minimize exceptions. This meant we would not simply move old servers to the new centers, but instead set up a modern and secure “enclave” private cloud and then migrate everything old to the new. This enabled a far more secure network and a level of data protection that could not be built into, or overlaid on, a typical legacy environment. Further, all applications would migrate to the new server patterns, and the exceptions would be sunset on clear timelines.

Last, all new applications had to be built using approved cloud design templates from the start. Of course, this requires proper sponsorship and discipline to execute, but the reward is greater. With far fewer exceptions, design is standardized, maintenance and administration can be made common and automated, and closing security gaps and administering patches becomes a far smaller task. By minimizing exceptions, you greatly increase standardization, which in turn increases quality, reduces effort, and enables automation and speed. – Read more

Being picky when choosing a cloud

How do you choose between one cloud of commodity machines and another? You have to get picky, writes Peter Wayner

The cloud began as a commodity business. Oh sure, there were small differences, like the size of the RAM or the way the virtual CPUs were measured, but the cloud offered a seemingly endless supply of seemingly identical machines. They ran the same distros and responded the same way on the command line. And if you snapped your fingers, the cloud service providers would give you root on another.

Keeping everything the same was the easiest way to lure the developers from the safety of their air-conditioned racks in the same building. Commodity machines mean there are no surprises or glitches. All of the clouds offered the most popular operating systems where all of the bits were arranged exactly the same.

The big problem for managers is choosing. If Ubuntu 18.04 is the same everywhere, what difference does it make whether you select Google, Microsoft or Amazon hardware? If the major distributions are supported everywhere, how do you decide?

To make choosing harder — but consuming easier — the space is very competitive. The developers at the cloud companies come up with clever ideas but they’re quickly copied. Genius becomes average very quickly. Innovation begets disruption which evolves into mundane feature sets that we take for granted.

How can we choose? You cannot just flip a coin. That is not scientific — even if you put on safety goggles and wear a lab coat to do it. If the suits ever notice that you are flipping a coin, they will realise they will not need to wait for artificial intelligence to be good enough to replace purchasing managers. They can replace you right now with a monkey and a coin. The solution is to get pickier. Yes, you could probably make do with any of the commodity products from any of the major clouds — or many of the not-so-major clouds too — but who wants to go through life just getting by? Who wants to settle?

Being picky sounds petty, but it is really the beginning of innovation, the tip of the spear that starts real change. It is really being sensitive to differences that matter and taking them into account.

To help this process, here are 10 different picky reasons to choose one of the major clouds. The reasons are not unequivocal because it is usually possible to accomplish much of the same thing using one of the competitors. But just because it is possible does not mean you should do it.

APIs

The clouds all offer a number of clever and sophisticated APIs like Google’s Cloud Vision, Azure’s Machine Learning service, or Amazon’s GameOn. There are hundreds of them and they make building out your own code that much easier. There is no reason why you cannot invoke these APIs from any cloud, or really any computer on the Internet, but sometimes you will need the performance that comes from running in the same network and even the same data centre. If some cloud offers what you need, it can be just a bit quicker to do much of your computation and data storage there too.
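
As an example, calling one of these APIs typically takes only a few lines. This sketch assumes the google-cloud-vision Python client is installed and credentials are configured; the same call works from any machine on the internet, but it runs fastest from inside Google’s own network.

```python
# Sketch: labeling an image with Google's Cloud Vision API.
# Assumes `pip install google-cloud-vision` and configured credentials.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Ask the service what it sees in the image, with confidence scores.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, label.score)
```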

Location

All of the clouds have data centres spread out around the globe. Microsoft Azure, for instance, has 54 regions and they carefully note where the data is “at rest” and which government has sovereignty. Perhaps you have a large collection of customers in one country. Perhaps the legal department has identified a special and particularly lucrative “feature” of the tax law of another one. There are dozens of strange and often quirky reasons why you might want your code running in one country over another. Most of these different data centres are clones of each other and it makes sense to stick with the same stack throughout the world. It just makes things simpler. The only caveat is that not all of the data centres are perfect clones and not all of the products are available in every location. – Read more

To Secure Multicloud Environments, First Acknowledge You Have a Problem

Multicloud environments change rapidly. Organizations need a security framework that is purpose-built for the cloud and that aligns with their digital transformation strategy.

Enterprise cloud adoption continues to increase rapidly. According to Gartner, expenditure on cloud-based enterprise IT offerings is rising at almost triple the rate of spending on traditional, non-cloud solutions. The firm predicts that more than $1.3 trillion in IT spending will move to the cloud by 2022. As organizations increasingly make their digital transformation to the cloud, they are not only adopting cloud applications; they are also moving important parts of their IT infrastructure, such as databases, to the cloud in an infrastructure-as-a-service model. But with this rapid shift to the cloud come new security challenges, especially when an organization has a multicloud environment.

Research shows that on average, companies use a mix of four or more public and private clouds. Many security professionals think they can simply take their traditional cybersecurity fundamentals, such as patching and scanning, and apply them to their multicloud environment to make their organization secure. While those fundamentals remain essential, they don’t address the reason so many organizations are struggling to secure their multicloud environments. Securing a multicloud environment is so difficult because you have essentially handed off your operating environment to a third party: Amazon Web Services, Azure, Google Cloud Platform, or another. As a security professional, you no longer have control over the infrastructure; you only have control at the application level or just above the operating system level.

It’s a true paradigm shift. Whereas in the past, security professionals had full control over their servers and data and were able to apply and enforce all their security best practices and principles, now they are at the mercy of the cloud provider. No longer owning the infrastructure or the platform, security professionals are discovering that they may not be able to use the same security tools they would have used in the past. It introduces the question, “What controls can I use in the cloud and at what level?”

Compounding the challenge, each cloud provider is now releasing its own native security tools. While these native-built security tools may make it easier to secure that particular cloud environment, they can’t be used with the other clouds the organization relies upon. With each cloud provider releasing new tool sets at a rapid pace — often daily — enterprise security teams are racing to keep up. In addition, many security vendors have their own private cloud that runs across public cloud hybrids. Enterprise security teams are challenged with trying to interconnect all these clouds at a business level, as well as at the cloud ecosystem level, in order to gain visibility and manage risk across all of them. The multicloud environment is a multiplier of complexity, and as a security professional, you’re held responsible for securing all of it.

Solving the Multicloud Security Puzzle
The first step in securing your multicloud environment is understanding that you have a problem. Many organizations have moved to the cloud so quickly that they’re just beginning to realize they haven’t built the necessary security programs and tools needed to scan and monitor across all their cloud environments. Next, make sure you know where your assets reside in the cloud and put protection around them, using a native approach. The native security tools offered by cloud providers have their advantages, but they don’t work across clouds. In a multicloud environment, you need the ability to bring all your different security tools under a single pane of glass for visibility, monitoring, and centralized control. Using security orchestration, automation, and response (SOAR) technologies, advanced analytics and machine learning, enterprise security teams can gain a single view of the threats, vulnerabilities, and perceived risks across their organization’s entire environment and create a central point for tracking security events and responding to alerts. [Editor’s note: Trustwave is one of a number of vendors that offer such services.]
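
The plumbing behind that single pane of glass is mundane but essential: pull findings from each provider’s native tooling and normalize them into one schema before they reach your analytics. Here is a hypothetical sketch, with invented field names and stubbed-out fetchers standing in for the real provider APIs:

```python
# Hypothetical sketch of multicloud alert normalization: each provider's
# native tool has its own schema, so map them all into one before triage.
from dataclasses import dataclass

@dataclass
class Alert:
    provider: str
    severity: str          # normalized: "low" | "medium" | "high"
    resource: str
    summary: str

def fetch_aws_findings():  # stub; in practice, call the provider's API
    return [{"Severity": 8.0, "Resource": "i-0abc123", "Title": "Crypto-mining"}]

def fetch_azure_alerts():  # stub
    return [{"properties": {"severity": "High", "resourceId": "vm-7",
                            "alertDisplayName": "Suspicious sign-in"}}]

def from_aws(raw):         # GuardDuty-style findings use 0-9 numeric severity
    sev = "high" if raw["Severity"] >= 7 else "medium" if raw["Severity"] >= 4 else "low"
    return Alert("aws", sev, raw["Resource"], raw["Title"])

def from_azure(raw):       # Security Center-style alerts use text severities
    p = raw["properties"]
    return Alert("azure", p["severity"].lower(), p["resourceId"],
                 p["alertDisplayName"])

# One queue, one schema, one place to triage across every cloud.
unified = [from_aws(a) for a in fetch_aws_findings()] + \
          [from_azure(a) for a in fetch_azure_alerts()]
for alert in unified:
    print(alert)
```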

It’s important to realize that as you bring all these tools together under a single pane of glass, you want to do it without having to send all your data to yet another cloud service. As much as possible, leave your data close to where it’s being generated. Look for SOAR solutions that are designed to pull just the alert or a summarization of the data. Then, based on insights gained from analysis, pull only the data necessary to make a decision or increase the fidelity of the alert. There are some excellent cloud-native security information and event management (SIEM) tools, but you want to make sure the data feeding into them is configured correctly.

Of course, security fundamentals also remain essential in a multicloud environment. Many organizations today aren’t performing basic security hygiene for their databases, which is alarming. Scan the cloud, and consistently scan and monitor your databases from both an event and log perspective to see if you have open, inherent risks.

Finally, perhaps the most important aspect of securing a multicloud environment is to make sure your security leaders are included in the decision-making process early whenever a business unit is considering adopting a new, cloud-based service or application. Too often, the security team is looped into the process too late, which causes a lot of inefficiencies and rework when an incorrect configuration or security lapse early on in the deployment process cascades to cause security vulnerabilities elsewhere. – Read more

 

How artificial intelligence and machine learning changed the SaaS industry

Yair Green, CTO at GlobalDots, explains how artificial intelligence and machine learning changed the Software-as-a-Service industry

There is no single moment when SaaS (Software-as-a-Service) arrived, because SaaS is a concept that involves numerous components. SaaS has evolved to the point where vendors and suppliers manage their own software and no installation is required because software is distributed instantaneously over the internet (via the cloud). Cloud computing allows businesses to consume computing resources over the internet as a utility — in much the same way they consume water or electricity. The SaaS-based cloud model now offers businesses significant efficiencies and cost savings, although different sectors have moved towards SaaS models at different speeds. In technical terms, SaaS relies on cloud delivery at scale, a minimum degree of widely available connectivity and enterprise-grade security.

And SaaS doesn’t sit still. As part of this continuous evolution, both artificial intelligence and machine learning are playing their respective roles as they become an integral part of the SaaS landscape.

Data

Historically, you distributed software to consumers and customers, but you got no insight into how they used it: which features were in use and which were not. When providing your software as a service, you also gain lots of data and insights that can help you improve that service. You can use this information to provide insight to your customers, better understand usage patterns and, ultimately, give intelligent feedback. The SaaS era coincides with the almost ubiquitous concept of big data. And SaaS, which is able to leverage AI and ML techniques, has a distinct advantage over the software provision days of old: the software provider now has access to aggregated data from different customers, which it can leverage to build a better service.

Companies now hold significant volumes of customer data all in one place. Artificial intelligence and machine learning enable a more automated means of mass data processing. Gartner defines big data not only by volume, but also by its variety and velocity.

Variety refers to the different media we use to represent data (beyond simple figures) and velocity is determined by the speed at which data is collected, analysed and implemented. The ultimate reality is that IT teams are dealing with increasing amounts of data and a variety of tools to monitor that data – which can mean significant delays in identifying and solving issues. And with the whole area of IT operations being challenged by this rapid data growth (that must be captured, analysed and acted on), many businesses are turning to AI solutions to help prevent, identify and resolve any potential outages more quickly.
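
One common building block behind such AIOps tooling, sketched here in plain Python with made-up numbers, is simple statistical anomaly detection: flag a metric reading that sits far outside its recent history, so an operator, or an automated runbook, can react before it becomes an outage.

```python
# Toy anomaly detector over an ops metric (made-up latency numbers):
# flag readings more than 3 standard deviations from the recent mean.
from statistics import mean, stdev

def is_anomaly(history, reading, threshold=3.0):
    if len(history) < 10:          # not enough history to judge
        return False
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(reading - mu) / sigma > threshold

latencies = [102, 98, 101, 99, 103, 97, 100, 104, 96, 101]
print(is_anomaly(latencies, 250))  # True: investigate before it pages you
print(is_anomaly(latencies, 105))  # False: normal jitter
```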

Marketing is particularly well placed to leverage AI and ML techniques. The data SaaS companies collect needs to be relevant and recent: the more up to date the data is, the more efficiently it can be put to use. Large corporations can access data collected through loyalty programs and cross-promotional activities, while smaller businesses can acquire data through customer surveys, online tracking or competitor analysis. AI/ML solutions can certainly be a golden opportunity for businesses to broaden their perspective on potential customers.

Automation

For B2B customer-centric businesses, AI allows functions that previously had a manual component to be automated; for example, it enables them to automate many customer experience processes, such as training and onboarding, marketing campaigns and ongoing customer service. Artificial intelligence essentially aggregates large quantities of data (for example, customer data) and filters it into automatic processes. Customer service AI platforms like chatbots, which respond to and troubleshoot customer inquiries automatically, enable customer service departments to take on additional inquiries. That’s great news for revenue retention and churn reduction, as customers tend to show a heightened interest in a purchase following a positive customer service experience.
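
A deliberately tiny sketch of the idea, with made-up intents and replies: route the inquiries a machine can answer automatically, and hand everything else to a human.

```python
# Toy support triage: answer what a bot can, escalate the rest.
CANNED_ANSWERS = {                 # made-up intents and replies
    "reset password": "Use the 'Forgot password' link on the sign-in page.",
    "billing date":   "Invoices are issued on the 1st of each month.",
}

def handle_inquiry(text: str) -> str:
    lowered = text.lower()
    for intent, answer in CANNED_ANSWERS.items():
        if intent in lowered:
            return answer          # automated path: instant, cheap
    return "Connecting you to a human agent..."  # human path

print(handle_inquiry("How do I reset password?"))
print(handle_inquiry("Your app corrupted my data"))
```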

Likewise, a negative customer service experience is a good way to lose customers. Supplementing your customer service team with AI technology can target the seamless cross-section between convenience, problem-solving and human experience; a typical example is using machine learning to automate aspects of customer service (especially self-serve).

The main UX challenge of SaaS is remoteness. Artificial intelligence can help to alleviate that sense of remoteness whilst delivering a more satisfying experience to the customer. There has been a lot of scaremongering with respect to machines taking jobs from humans, and that AI will bring about automation in practically all walks of working life — however, the more likely scenario is that AI will deliver most value when it is deployed in conjunction with human beings. SaaS can, and should, manage those interactions which can be handled automatically (classic SaaS) and those which require human intervention. AI-augmented human interactions can drive SaaS interactions too. – Read more

10 Software Founders Share The 5 Things You Need To Know Before You Start a SAAS

Have you ever had an idea for a new app or software, but just dropped it because you didn’t know where to begin?

Many people have the same experience. Wouldn’t it be fantastic to turn that idea into a successful app?

Authority Magazine recently interviewed more than fifty founders who had an idea for an app or software and took that idea and created a flourishing business.

Among other topics, the SAAS founders shared the five things you need to know if you want to start a SAAS.

Here are ten highlights of these interviews:

Dennis Cail 

1. Know why you are starting your app before you start. I’m a huge fan of Simon Sinek’s book, Start with Why. The premise is you have to start with the Why to get to the How and What. The Why for Zirtue is to create a more financially inclusive world by mobilizing and digitizing loans between friends and family.

2. Know your blind spots and solve for your gaps. This requires you to part ways with your ego and admit your weaknesses. A good example of this is my decision to partner with my co-founder, Michael Seay. Michael is a strong financial engineer and I am extremely efficient as a technical engineer. This brings immediate value and traction to our FinTech company, and that combination of skill sets has allowed us to build and scale quickly.

3. Know that you can’t do it alone. This is an extension of knowing your blind spots. I couldn’t do anything I do without a strong team around me.

4. Know your market and if you don’t know it, learn it fast. An example of this would be all the research we did prior to launching Zirtue to understand the market we were seeking to disrupt. Simple premise being you don’t have a business unless you have a clear market to support that business.

5. Know that bad news never gets better with time. I am a huge fan of transparency and I have learned that if you want your stakeholders (i.e. investors, employees, users, etc.) to trust you, you have to trust them with the bad news as much as you trust them with the good news. Just have a mitigation plan and always be willing to ask for help when you need it. Leadership pride and ego is the death of most high potential companies.

  1. Start with why — I know people are sick of hearing this but people keep saying it because it’s still the most relevant advice. Why would anyone care? Why this solution? Why now? Why you?
  2. Get out and ask people — but make sure you’re asking the right questions. I asked lots of questions, and was able to get to the core of the issue.
  3. Know SOMETHING about development — you don’t have to be a coder (although that certainly helps), but you need to understand the development process, how things work, how developers think.
  4. Learn how to sell — I’ve been in sales most of my career and I love it. I solve people’s problems and they pay me, it’s great! But so many founders are afraid of the sales process. You HAVE to be able to sell yourself and your ideas to potential cofounders, investors, employees and most importantly to your target audience. If you can’t sell, you’ll have a very hard time.
  5. Be able to be ok without perfection — figure out the core thing you are providing and build that, come out from behind the computer and show it to people, get feedback and be willing to mold it to what the market needs and will pay for, not what you think is the right solution.

Read more

Cloud technology is fueling the enterprise with changing work patterns

How does shifting to cloud technology enable enterprises to function better?

What you may be getting wrong about cybersecurity

Attention-grabbing cyberattacks that use fiendish exploits are probably not the kind of threat that should be your main concern – here’s what your organization should focus on instead

When we hear about breaches, we assume that attackers used some never-before-seen, zero-day exploit to breach our defenses. This is normally far from the truth. While it is true that nation-states hold onto carefully crafted zero-days that they use to infiltrate the most nationally significant targets, those targets are not you. And they’re probably not your organization, either.

At this year’s Virus Bulletin Conference, much like in years past, we were regaled with many tales of attacks against financially important, high-profile targets. But in the end, the bad actors didn’t get in with the scariest ’sploits. They got in with a phishing email, or, as in a case that one presenter from RiskIQ highlighted, they used wide-open permissions within a very popular cloud resource.

The truth is that the soft underbelly of the security industry consists of hackers taking the path of least resistance: quite often this path is paved with misconfigured security software, human error, or other operational security issues. In other words, it’s not super-“l33t” hackers; it’s you.
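
Since misconfiguration is the path of least resistance, it pays to scan for it yourself. As one hedged example using the boto3 AWS SDK, the sketch below looks for S3 buckets that lack a public-access block, one of the most common wide-open-permissions mistakes:

```python
# Sketch: find S3 buckets without a "block public access" configuration,
# a common misconfiguration behind real-world breaches. Assumes boto3
# and credentials with read access to bucket settings.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)
        flags = cfg["PublicAccessBlockConfiguration"]
        if not all(flags.values()):
            print(f"REVIEW: {name} has public access only partly blocked")
    except ClientError as e:
        if e.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"ALERT: {name} has no public access block at all")
        else:
            raise
```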

Even if you think you’re doing everything right within your own organization, that may still not be enough. While you may have thoroughly secured your own network, those you interact with may not be so locked down. You may think that you’ve successfully eschewed third-party software, that you don’t use the cloud for collaboration, so you’re safe in your enclave. However, third parties situated within your supply chain may be using cloud services in ways that endanger you. And sometimes neither you nor they even know that this situation has created significant risk to both of your environments.

Not to worry, you’re not alone, and there are things you can do about it.

High-profile breaches these days often start with third parties you use. While you might have the greatest security team out there, maybe they don’t. – Read more

How Encryption Is Solving Cloud Computing’s Greatest Challenge

Keeping our information safe in the cloud is a huge challenge for cybersecurity vendors. End-to-end encryption can do that in a way that still leaves the data usable.

By now, you already know that cloud computing is a really big deal.

More than a quarter of a trillion dollars will be spent on public cloud services in 2019, and they’ll account for a third of global businesses’ overall IT budgets. Researchers believe that nearly half of all corporate workloads ran on various cloud services in 2017. But that will increase to 55% in 2019, and 94% by 2021.

In other words, nearly all of our data will move to the cloud within the next two years.

That’s great news for the public cloud providers. In the quarterly earnings report Microsoft (NASDAQ:MSFT) delivered Wednesday, it reported that Azure sales grew 63% in constant currency and said its Intelligent Cloud division now provides a third of overall revenue. Amazon (NASDAQ:AMZN) similarly noted in its earnings call that its Web Services group grew 35% annually and now accounts for 28% of its top line.

In short, there’s big money being made in providing the cloud’s infrastructure.

The cybersecurity opportunity

Due to this mass migration of data to the cloud, there’s never been a greater need for innovative cybersecurity companies. But as the market evolves, endpoint security antivirus software from Norton and enterprise firewalls from Palo Alto Networks just won’t cut it anymore. Our cloud-housed data needs new tools to protect it from data breaches and hackers — and a new wave of companies is beginning to step up to address this need.

Zscaler (NASDAQ:ZS) is one of them. Based on the premise that network firewall hardware will soon become inadequate, Zscaler offers a cloud Secure Web Gateway entirely through its cloud-based security layer — no hardware required. This decentralizes the cybersecurity protection, allowing the data to flow back and forth from the public cloud rather than redirecting it to clients’ own physical data centers.

A similar and complementary approach is to disguise all of the data that’s flying through the cloud, so it would appear as gibberish to hackers who intercept it. End-to-end encryption is gaining traction as a way to encrypt sensitive data like financial accounts or medical health records.

But there’s an added bonus: the encrypted data can still be computed upon. One of the biggest advantages offered by the cloud is the application of machine-learning algorithms, which can make correlations between data points and draw important conclusions. Rather than leaving the most sensitive and valuable information locked away in on-premises vaults, why not encrypt it so it can still be used for AI calculations in cloud-computing data centers?
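
Fully general computation on encrypted data remains expensive, but partially homomorphic schemes already support specific operations. As a toy demonstration (using the open-source python-paillier library, not PreVeil’s own technology, which the article doesn’t detail), the server below sums values it can never read:

```python
# Toy demo of computing on encrypted data with the Paillier scheme
# (pip install phe). This is NOT PreVeil's system; it just shows the
# principle: the server adds numbers it cannot decrypt.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Client side: encrypt sensitive values before they leave the premises.
salaries = [52_000, 61_500, 58_250]
encrypted = [public_key.encrypt(s) for s in salaries]

# Cloud side: aggregate ciphertexts without seeing any plaintext.
encrypted_total = sum(encrypted[1:], encrypted[0])

# Only the key holder can decrypt the aggregate result.
print(private_key.decrypt(encrypted_total))  # -> 171750
```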

This is exactly the approach being pursued by cybersecurity company PreVeil. Born out of research done at MIT, PreVeil’s end-to-end encryption could redefine cloud-based cybersecurity in a way that doesn’t interfere with workflows (i.e. you’ll never even notice it’s running) and still allow for machine-learning applications.

I recently spoke with PreVeil co-founder and Chief Technology Officer Raluca Popa. Raluca is developing PreVeil’s technology and recently won MIT Technology Review’s prestigious Innovators Under 35 award, previous recipients of which include CRISPR inventor Feng Zhang and Tesla co-founder JB Straubel.

In our conversation, Raluca describes the current state of cybersecurity, explains why end-to-end encryption will be important, and lists a few things individual investors interested in the space should be watching. – Read more

Who is responsible for security in the public cloud?

Who is responsible for the security of data stored in the cloud: the business, or the cloud provider?