Five Ways Technology Helps You Accelerate Your Commercial Real Estate Business

Are you ready to be a part of a digital transformation in commercial real estate (CRE)? If your answer is yes, then great! It’s important to be aware of the latest CRE technologies so that you can decide what’s best for your business and discover tools to give you an edge over the competition. A pen and paper or a spreadsheet will only get you so far. Today’s technologies will help you stay on top of things as you grow and scale your CRE business.

If you’re not sure whether you want to get involved in new and ever-changing technologies, we get it — change is hard! We understand that you’ve been successfully running your commercial real estate business using the technology you already have. However, as new CRE technologies develop, there’s a whole new set of tools at your disposal. We strongly encourage you to give newer technologies a chance. Here’s why:

1. Cloud-based software makes information easily accessible

Do you ever struggle to access necessary information? Maybe one of your investors needs to know how their investments are performing, but you can’t find the documents you need in order to crunch the numbers for them. Or maybe you need to know the terms of a lease on the spot, but the lease is sitting in your desk drawer in your office on the other side of town. Whether it’s a misplaced file folder, a disappearing document or excessive reliance on a team member to remember certain information, we’ve all been there.

With today’s technology, it’s easier than ever to keep all your crucial information centralized in one place. You can organize and access your portfolio data, tenant information, budgets and lease terms all in one place. With cloud-based technologies, you can even access documents and information from any device.

Having your information easily accessible can save time, which is important when tenants and stakeholders are waiting on you. It also helps with collaboration, because everyone on your team can access information when they need it without having to rely on others.

To get started using cloud-based software, you simply have to find one that meets your needs and import your data. Set it up so that you can reach it from your various devices (e.g., your work computer for when you’re in the office and your phone for when you’re out and about).

2. You can keep communication organized

Have you ever been in this scenario? Someone from your team speaks with a tenant — let’s say the tenant is discussing whether or not they intend to renew their lease in the coming months. Your team member goes on vacation the next week, and you’re wondering whether the tenant is happy in their current lease or not, but you can’t easily find out without repeating the conversation with the tenant.

Now, there are easy-to-use programs that can help you manage tenant relationships in a variety of ways. Your team can quickly and easily record conversations with tenants in a centralized tenant management platform. When combined with friendly and caring management, these types of tools can help your business build better relationships and increase renewal rates and tenant retention.

There are many options for tenant management software, and a simple online search should give you a great place to start. If you have a lot of tenants, this can be a game-changer.

3. You can get automated alerts when something requires your attention

When you’re managing several commercial real estate properties, it’s easy to miss things. Whether it’s a missed rent escalation payment or an imbalance in your real estate portfolio, you need to stay on top of all the small and large issues that require your attention — no matter how many properties you manage.

With today’s technology, you can receive timely, helpful alerts and reminders. With systems like this, you’ll never have to wonder when lease escalations are due or loans are coming up for renewal. This technology allows you to stay in control of your fast-moving business without stressing about the tasks you may have forgotten.

When researching real estate analytics tools, make sure you settle on a solution that provides automated alerts and reminders.

4. You can easily collect and view data to make future predictions 

In the past, many landlords had their important information stored either on paper or in disconnected files, making it hard to look at all their data as a whole. And even today, with QuickBooks and other accounting software, most landlords lack the specific algorithms and tools that they need to glean valuable data that would help them make predictions and plans for the future of their CRE business.

Along with keeping your data in a cloud-based centralized location, you should look for CRE analytics software that will let you look at all your data as a whole. When doing your research, we suggest that you schedule a demo (if possible) to see just how your data can be visualized. Is the data organized in a helpful, informative way? If so, that may be the tool you need to better your business. – Read more

How to Get More from Your Data in 2020

As organizations look for ways to drive flexibility, agility, and innovation, they can expect to see these three trends in the coming year.

The advancement of technologies such as the Internet of Things (IoT), wearable technologies, self-driving vehicles, and mobile technologies, supported by 5G connectivity, has led to the generation of large volumes of data. That data has grown unwieldy across systems, spanning data centers, the cloud, and, more recently, the edge.

In 2020, businesses will increasingly demand capabilities that enable them to achieve digital transformation through secured storage, faster integration, and better data discovery processes. The adoption of AI-driven analytics supported by cognitive computing capabilities such as machine learning (ML) has accelerated the delivery of business insights, moving from low-latency to real-time analytics and resulting in faster time-to-market.

Companies looking to leverage these up-to-the-minute actionable insights in 2020 have to adopt newer approaches such as data fabric/mesh, digital twins, and a multicloud strategy to stay ahead of the competition. As organizations continue to look for ways to drive flexibility, agility, and innovation, they can expect to see these three trends in the coming year:

2020 Trend #1: Data fabric goes dynamic to become data mesh

Data fabric allows unimpeded access and sharing of data across distributed computing systems by means of a single, secured, and controlled data management framework. Many large companies run multiple applications for their business requirements, resulting in the collection of large volumes of structured, semistructured, and unstructured data. This data is siloed across diverse data sources such as transactional databases, data warehouses, data lakes, and cloud storage.

A data fabric architecture is designed to stitch together historical and current data across multiple data silos to produce a uniform and unified business view of the data. It provides an elegant solution to the complex IT challenge of handling enormous amounts of data from disparate sources without having to replicate all of the data into yet another repository. This feat is accomplished through a combination of data integration, data virtualization, and data management technologies to create a unified semantic data layer that aids many business processes (such as accelerating data preparation and facilitating data science).
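To make the virtualization idea concrete, here is a minimal sketch, not any vendor's actual product: a single query interface that pulls from two hypothetical silos (a transactional SQLite database standing in for a warehouse and a CSV extract standing in for a data lake) and returns one unified view without copying either source into a new repository. The table and column names are invented for illustration.

```python
import csv
import sqlite3
from typing import Dict, Iterator, List


class UnifiedCustomerView:
    """Toy 'data fabric' layer: one semantic view over two hypothetical silos."""

    def __init__(self, warehouse_path: str, lake_csv_path: str):
        self.warehouse_path = warehouse_path   # e.g. a transactional SQLite DB
        self.lake_csv_path = lake_csv_path     # e.g. a CSV extract from a data lake

    def _from_warehouse(self) -> Iterator[Dict]:
        # Pull current records from the relational silo.
        conn = sqlite3.connect(self.warehouse_path)
        try:
            rows = conn.execute("SELECT id, name, region FROM customers")
            for cid, name, region in rows:
                yield {"id": cid, "name": name, "region": region, "source": "warehouse"}
        finally:
            conn.close()

    def _from_lake(self) -> Iterator[Dict]:
        # Pull historical records from the file-based silo.
        with open(self.lake_csv_path, newline="") as fh:
            for row in csv.DictReader(fh):
                yield {"id": row["id"], "name": row["name"],
                       "region": row["region"], "source": "lake"}

    def customers(self) -> List[Dict]:
        # The 'unified semantic layer': callers see one schema, not two systems.
        return list(self._from_warehouse()) + list(self._from_lake())
```

A real data fabric adds governance, security and much richer metadata on top, but the contract is the same: consumers query the layer, not the individual silos.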

Increasingly, as data fabric shifts from static to dynamic infrastructure, it develops into what is called a data mesh. Data mesh is a distributed data architecture that follows a metadata-driven approach and is supported by machine learning capabilities. It is a tailor-made distributed ecosystem with reusable data services, a centralized governance policy, and dynamic data pipelines. The chief notion behind data mesh is that ownership of domain data is distributed across different business units in a self-serve, consumable format. In other words, data is owned at the domain level, and these domain datasets are made available for efficient utilization across different teams.

Another important aspect of data mesh is its globally available centralized discovery system (also known as its data catalog). Using the data catalog, multiple teams looking for insight can access the data discovery system to find out what data is available in the system, its point of origin, the data owner, sample datasets, and associated metadata. The domain data is indexed in this centralized registry system for quick discoverability. Finally, for data to be congruous across domains, data mesh focuses on delivering interoperability and standards for addressability between domain datasets in a polyglot ecosystem.
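As a rough illustration only (the field names and example dataset are assumptions, not any standard catalog schema), the discovery system described above can be reduced to a registry that domains publish into and any team can search:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class CatalogEntry:
    """One domain dataset registered in the central discovery system."""
    name: str
    domain: str          # owning business unit, e.g. "payments"
    owner: str           # contact for the dataset
    origin: str          # point of origin, e.g. source system or pipeline
    sample_rows: List[dict] = field(default_factory=list)
    metadata: Dict[str, str] = field(default_factory=dict)


class DataCatalog:
    """Toy centralized registry: domains register datasets, any team can discover them."""

    def __init__(self) -> None:
        self._entries: Dict[str, CatalogEntry] = {}

    def register(self, entry: CatalogEntry) -> None:
        self._entries[entry.name] = entry   # index by name for quick discoverability

    def discover(self, keyword: str) -> List[CatalogEntry]:
        # Simple keyword search across names and metadata values.
        kw = keyword.lower()
        return [e for e in self._entries.values()
                if kw in e.name.lower() or kw in " ".join(e.metadata.values()).lower()]

    def describe(self, name: str) -> Optional[CatalogEntry]:
        return self._entries.get(name)


# Example: the "payments" domain publishes a dataset; an analytics team finds it.
catalog = DataCatalog()
catalog.register(CatalogEntry(
    name="payments.transactions_daily",
    domain="payments",
    owner="payments-data@example.com",
    origin="stream: kafka.payments.v1",
    metadata={"format": "parquet", "refresh": "daily"},
))
print([e.name for e in catalog.discover("payments")])
```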

2020 Trend #2: Digital twins at the edge lead to real-time analytics

A digital twin is a digital facsimile or virtualized simulation of a physical object or system, built from telemetry data pieced together from sensors and modeled with software such as computer-aided design (CAD). Digital twins can mimic the behavior of an automobile, industrial device, or human, and can be coalesced to mimic an entire engineering operation. With the emergence of technologies such as AI, ML, cognitive computing, and the Industrial Internet of Things (IIoT), digital twin technology is creating unparalleled possibilities and leading to innovative business concepts.

Digital twins can be integrated to model an intricate real-world setting while still allowing companies to connect with the individual digital twins that represent internal and/or external mechanisms of an operation. Dynamic and historical data generated by IoT sensors offer insight into the actual industrial operation and environment through real-time data feeds that can later be leveraged by IoT applications (such as edge computing).

Digital twins are expected to be as accurately responsive as their physical equivalents. Live data generated and processed at the local device level by edge computing enables latency-sensitive operations. Moving digital twins from the cloud to the outer boundaries of the network allows near-real-time latency, real-time analytics, data privacy, detection of operational anomalies, and failure prediction. Digital twins, complemented by edge computing, enabled by affordable sensor technology, and augmented by growing computational capabilities, help companies accelerate the product development process, boost efficiency, reduce cloud storage costs, and build a comprehensive portfolio of products.
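A heavily simplified sketch of that edge pattern (the device, sensor fields and thresholds are invented for illustration): a twin object keeps a live mirror of a device's state at the edge and flags operational anomalies locally, instead of waiting for a round trip to the cloud.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import Dict, List


@dataclass
class PumpTwin:
    """Toy digital twin of an industrial pump, updated from edge telemetry."""
    device_id: str
    history: List[Dict[str, float]] = field(default_factory=list)

    def ingest(self, reading: Dict[str, float]) -> List[str]:
        """Mirror the latest sensor reading and return any anomaly alerts."""
        self.history.append(reading)
        alerts = []
        # Rule 1: hard limit on temperature (illustrative threshold).
        if reading["temperature_c"] > 85.0:
            alerts.append(f"{self.device_id}: temperature {reading['temperature_c']}C exceeds limit")
        # Rule 2: vibration drifting well above the recent baseline hints at wear.
        recent = [r["vibration_mm_s"] for r in self.history[-20:]]
        if len(recent) >= 5 and reading["vibration_mm_s"] > 1.5 * mean(recent[:-1]):
            alerts.append(f"{self.device_id}: vibration spike, possible failure precursor")
        return alerts


# Simulated telemetry arriving at the edge gateway.
twin = PumpTwin("pump-07")
for reading in [
    {"temperature_c": 61.0, "vibration_mm_s": 2.1},
    {"temperature_c": 63.5, "vibration_mm_s": 2.0},
    {"temperature_c": 64.0, "vibration_mm_s": 2.2},
    {"temperature_c": 66.0, "vibration_mm_s": 2.1},
    {"temperature_c": 88.0, "vibration_mm_s": 4.9},   # faulty reading triggers both rules
]:
    for alert in twin.ingest(reading):
        print(alert)
```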

2020 Trend #3: Multicloud provides best-in-class solutions

Multicloud involves using cloud services from multiple public cloud managed service providers (MSPs) in a single network architecture to attain the optimum mix of latency, cost, and other key metrics.
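As a back-of-the-envelope illustration only (the provider names, metrics and weights below are made up), that "optimum mix" can be thought of as a weighted score per workload across candidate clouds:

```python
# Hypothetical per-provider metrics for one workload; lower is better for all three.
providers = {
    "cloud_a": {"latency_ms": 18, "cost_per_hour": 0.42, "egress_per_gb": 0.09},
    "cloud_b": {"latency_ms": 35, "cost_per_hour": 0.31, "egress_per_gb": 0.08},
    "cloud_c": {"latency_ms": 22, "cost_per_hour": 0.55, "egress_per_gb": 0.05},
}

# Weights express what this workload cares about most (latency-sensitive here).
weights = {"latency_ms": 0.6, "cost_per_hour": 0.3, "egress_per_gb": 0.1}


def score(metrics: dict) -> float:
    """Weighted sum of normalized metrics; a lower score means a better fit."""
    total = 0.0
    for key, weight in weights.items():
        worst = max(p[key] for p in providers.values())
        total += weight * (metrics[key] / worst)   # normalize against the worst candidate
    return total


best = min(providers, key=lambda name: score(providers[name]))
print({name: round(score(m), 3) for name, m in providers.items()})
print("placement:", best)
```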

Multicloud adoption was initially driven by availability and performance as well as avoidance of vendor lock-in so organizations could benefit from best-in-class solutions. These days, companies look to cloud MSPs to support services for better security and failover, to meet data governance requirements, and to avoid downtime. – Read more

What is the right way to handle Operations, Data Quality and Data Availability issues? – Data in the Cloud

Moving your data pipelines from an on-premise solution to a public cloud provider can be a very daunting endeavour. However, for many businesses, the benefits of moving to the cloud far outweigh the risks. So what problems, blockers and pitfalls are data engineers and software developers likely to run into? And how should the team be set up to maximise these benefits and make sure it is set up for success?

Before answering these questions, let us first explore the tremendous potential of moving to a cloud platform and why cloud migration is increasing so rapidly.

Cost Reduction. Owning and maintaining your own Data Centre can be expensive. As well as the hardware refresh costs, there is the overhead of having to manage outages for software upgrades and physical fixes. Moving to the cloud offers the potential to manage and accurately predict your costs.

Flexibility. Perhaps one of the biggest motivators for migrating to a cloud platform is the flexibility and reliability it offers. Multiple server types, predefined machine images and the latest software versions are all within easy reach with reliability built into your service.

Scalability. Having the ability to gain that extra bit of computing power when you most need it (and then dropping back down again) is a major factor for cloud adoption.

Disaster Recovery/Security. Data Centre based computing requires additional hardware and storage in an external location as part of a full and proper disaster recovery strategy. It also requires a mechanism for maintaining the data transfer to ensure (and prove) that no data has been lost. This is taken care of in the cloud, as multiple copies are kept across both availability zones and regions to ensure restoration can be done as quickly as possible.

Immediately useable (and useful) set of tools. Most of the big cloud providers offer a set of tools that can be utilised on the platform and can be extremely useful for getting your applications up and running very quickly. Everything from networking to Machine Learning tools can be used (at a price).

While moving to the cloud is clearly good from a business point of view, how does it affect the way data engineers have to work?

 What is Data Engineering?

Data engineers are the designers, builders and managers of data pipelines. They develop the architecture, the processes and own the performance and data quality of the overall solution. To that end, they need to be specialists in architecting distributed systems and creating reliable pipelines, combining data sources and building and maintaining data stores.

The role has evolved in the last few years as software engineers have been required to learn more about data and traditional database engineers have been required to learn software engineering languages as businesses have moved away from enterprise warehouse solutions to distributed ‘big data’ pipelines.

Fig 1. Skills required by a Data Engineer

As such, data engineers require skills in a number of technical disciplines. These include scripting (such as Linux shell scripting and Python), object-oriented programming skills (particularly Java and Scala) and of course SQL and how the syntax varies between different applications. The role also requires an understanding of distributed systems, data ingestion and processing frameworks and storage engines. Experienced data engineers know the strengths and weaknesses of each tool and what it is best used for. There is also a requirement to know the basics of DevOps, particularly when installing new tools, running statistical experiments and implementing machine learning for Data Scientists.
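To ground a couple of those skills in something concrete, here is a deliberately tiny sketch (the file name and schema are invented): Python scripting to ingest and lightly clean a raw CSV feed, and SQL to load and aggregate it in an embedded store.

```python
import csv
import sqlite3

RAW_FEED = "raw_orders.csv"   # hypothetical landing file from an upstream system

# 1. Ingest: read and lightly clean the raw feed (scripting skill).
with open(RAW_FEED, newline="") as fh:
    rows = [
        (r["order_id"], r["customer_id"], float(r["amount"]))
        for r in csv.DictReader(fh)
        if r["amount"]                      # drop records with a missing amount
    ]

# 2. Load and query: push into a store and aggregate with SQL (SQL skill).
conn = sqlite3.connect("orders.db")
conn.execute("CREATE TABLE IF NOT EXISTS orders (order_id TEXT, customer_id TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
conn.commit()

for customer_id, total in conn.execute(
    "SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id ORDER BY 2 DESC"
):
    print(customer_id, round(total, 2))
conn.close()
```

Production pipelines add the distributed frameworks, orchestration and monitoring mentioned above, but the ingest-load-query shape is the same.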

So, with all those skills and all that knowledge, what is the problem? Surely migrating straight to the cloud is easy?

Well, not quite. Despite a decent knowledge of operations, Data Engineers are NOT DevOps engineers. They do not have the deep level of understanding of networks, VPCs, subnets, security and infrastructure languages (like Terraform). Plus, as more companies look to move to a multi-cloud strategy, the complexity of cloud account structures requires specialist knowledge. There is also an increasing requirement to help users navigate their way around the intricacies that come from using multiple data mining tools. Analysts in the past have never had to worry about how big a cluster must be to make sure their query completes in a reasonable time, nor have they had to interpret a SQL failure message that reads like a Java run-time error. The development feature teams often do not have the time to help with this, so something else is required.

Data and Application Operations (DOPS & APO)

DOPS work with engineers and network teams and are responsible for the support of the managed data and shared cloud accounts. This includes VPCs, Subnets, Identity and Access Management, White and Blacklisting and ensuring account security is compliant with the governance team. They also support application deployments, are the point of contact for any infrastructure issue resolution and are the focal point for upgrades and patching (where required).

They also provide a vital service in helping to manage cloud costs. Developers (and testers for that matter) have a habit of launching clusters to serve their needs – in the development, testing and production environments – but then often forget to tear them down afterward. This means the company can end up spending a fortune on virtual instances that are simply not being used. The DOPS team will monitor, alert and generally police this to make sure it does not occur.
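That policing can be as simple as a scheduled script. A rough sketch under assumed inputs (the cluster inventory is faked inline here; in a real run it would come from your provider's inventory API, and the print would be an alert to the owner):

```python
from datetime import datetime, timedelta, timezone

IDLE_LIMIT = timedelta(hours=4)   # assumption: anything idle longer than this gets flagged


def find_idle_clusters(clusters, now=None):
    """Return clusters in non-production environments that look forgotten."""
    now = now or datetime.now(timezone.utc)
    return [
        c for c in clusters
        if c["environment"] in ("dev", "test")
        and now - c["last_job_finished"] > IDLE_LIMIT
    ]


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    clusters = [
        {"name": "etl-dev-01", "environment": "dev",
         "last_job_finished": now - timedelta(hours=9), "owner": "team-a"},
        {"name": "ml-test-02", "environment": "test",
         "last_job_finished": now - timedelta(minutes=30), "owner": "team-b"},
    ]
    for c in find_idle_clusters(clusters, now):
        # In practice: notify the owner, and tear the cluster down if nobody objects.
        print(f"IDLE: {c['name']} ({c['owner']}) has run nothing for more than {IDLE_LIMIT}")
```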

The APO team are more focused on supporting the teams that are using the data. If you run an application that auto-scales a cluster based on the CPU utilisation needed for a query to complete, it is in the interests of the company to have someone who is an expert in query optimization, or costs are going to spike with poorly written queries. That is where the APO team come in. They are experts not only in rewriting queries for speed but also in teaching users how to do this themselves. They also monitor query and table usage so that a deprecation program can be created for low-usage tables, as well as directly supporting engineering teams with ‘proof of concepts’ for new external applications. With the rate of new products entering the data processing market, providing a service to evaluate new products is vital and ensures continued innovation within the engineering team.
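The usage monitoring behind that deprecation program can be sketched in a few lines (the log format, table names and lookback window are assumptions): aggregate parsed query logs by table and surface anything with no reads in the last 90 days as a deprecation candidate.

```python
from collections import Counter
from datetime import datetime, timedelta

USAGE_THRESHOLD = 1          # fewer reads than this in the window flags a table
WINDOW = timedelta(days=90)

# Assumed shape of parsed query-log records: (table_name, query_timestamp).
query_log = [
    ("sales.orders",        datetime(2020, 1, 15)),
    ("sales.orders",        datetime(2020, 1, 20)),
    ("sales.orders_legacy", datetime(2019, 8, 2)),
    ("marketing.campaigns", datetime(2020, 1, 28)),
]


def deprecation_candidates(log, as_of):
    """Tables with little or no usage inside the lookback window."""
    cutoff = as_of - WINDOW
    recent_reads = Counter(table for table, ts in log if ts >= cutoff)
    all_tables = {table for table, _ in log}
    return sorted(t for t in all_tables if recent_reads[t] < USAGE_THRESHOLD)


print(deprecation_candidates(query_log, as_of=datetime(2020, 1, 31)))
# -> ['sales.orders_legacy']
```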

So DOPS ensures the cloud infrastructure is supported across all the data teams, and APO ensures that the applications and users are supported.

Perfect. Now you are all set up to fully utilize the power of the cloud. Or are you?

What about data quality and data availability?

As the business world starts to rely more and more on machine learning, the accuracy of the underlying data that ML models are trained on has become far more important.

It is no longer acceptable to have ‘mostly’ good data; even the smallest amount of ‘bad’ data can cause inaccuracies in predictive analytics.

As data engineers, we bear the brunt of any criticism, and rightly so – data scientists often bemoan the fact that much of their time is spent cleaning up data rather than building the models they are trained to build. We are the first part of a long chain, and the world of data engineering has to embrace this responsibility.

This is the usual timeline for a Production failure:

  1. Production Support are alerted to a failure in the middle of the night
  2. They apply a ‘Band-aid’ fix to get the application up and running again
  3. The next day they inform the development team who own the code to assess options
  4. The development team then plan the reprocessing of bad data to stop users from having to halt their work
  5. A permanent fix is suggested, estimated and then put on the backlog (often never to be seen again!)

The other issue with data quality is that feature development teams can spend multiple days within a sprint simply trying to get to the bottom of failures. This means that promised roadmap items get pushed further and further back, making the teams less efficient and causing frustration and mistrust from the stakeholders.

So, what can we do about it?

Step forward the Data Reliability Engineering team!

Data Reliability Engineering (DRE)

DRE is what you get when you treat data operations as a software engineering problem. Using the philosophy of SRE, Data Reliability Engineers are 20% operations and 80% developers. This is not about being a production support team – this is about being a talented and experienced development team that specializes in data pipelines across multiple technical disciplines.

The 6-step mission of DRE is:

  1. To apply engineering practices to identify and correct data pipeline failures
  2. To use specialist knowledge to analyse pipelines for weaknesses and potential failure points and fix them
  3. To determine better ways of coping with failures and increase automation of reprocessing functionality
  4. To work with pipeline developers to advise of potential DQ issues with new designs
  5. To utilize and contribute to open source DQ software products
  6. To improve the ‘first to know rate’ for DQ issues (see the sketch after this list)
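A minimal, hand-rolled sketch of the kind of check behind point 6 (the rules, field names and thresholds are placeholders, not a recommendation of any particular DQ product): validate each batch before it is published and raise an alert on the first bad record rather than on the first user complaint.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Check:
    name: str
    passes: Callable[[dict], bool]   # returns True when a record is acceptable


CHECKS = [
    Check("amount_not_null", lambda r: r.get("amount") is not None),
    Check("amount_non_negative", lambda r: r.get("amount") is None or r["amount"] >= 0),
    Check("currency_known", lambda r: r.get("currency") in {"GBP", "USD", "EUR"}),
]


def validate_batch(records: List[dict]) -> Dict[str, int]:
    """Count rule violations for a batch; anything non-zero should page the DRE team."""
    failures = {check.name: 0 for check in CHECKS}
    for record in records:
        for check in CHECKS:
            if not check.passes(record):
                failures[check.name] += 1
    return failures


batch = [
    {"amount": 10.0, "currency": "GBP"},
    {"amount": -4.2, "currency": "GBP"},    # violates amount_non_negative
    {"amount": None, "currency": "XXX"},    # violates amount_not_null and currency_known
]
failures = validate_batch(batch)
if any(failures.values()):
    # In a real pipeline this would raise an alert and block publication of the batch.
    print("DQ ALERT:", {name: count for name, count in failures.items() if count})
```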

So, the DRE team own the failure, the fix and the message out to users. They can call on the feature team developers for help if specialist knowledge is required, but they aim to handle as much as possible in-house, thus freeing feature teams to continue with their roadmaps. – Read more