Balancing Act: Navigating the Promise and Pitfalls of AI and Emerging Technologies

Emerging technology and artificial intelligence present exciting opportunities in virtually every industry and field. From automating time-consuming, manual processes to generating content from a simple line of text, the potential use cases and capabilities these new tools offer can prompt organizations to adopt them rapidly.

Yet the enticing possibility of what could be can also cause companies to overlook the pitfalls and risks that inherently come along with emerging technology. When implementing new software solutions or tools, organizations must first lay the groundwork, assess and prepare for risks, leverage existing processes and technology, and develop a plan for the unexpected.


Laying the Groundwork

Once an organization identifies a new technology it plans to bring on board, questions often arise about where to start. Moving too quickly may cause a company to miss small issues that could become larger problems down the road, such as scalability gaps, weak access controls, and unexpected results. At the same time, moving too slowly can unnecessarily delay the deployment of tools intended to help teams increase efficiency and lead to other missed opportunities.

That doesn’t mean leadership needs to have all the answers right away. Rather, there are key considerations it must evaluate, such as cybersecurity and regulatory compliance. Because security and compliance requirements are evolving in response to emerging technology, organizations may not know the specifics of what each area requires; however, there are still best practices they can consider in preparation. These include, but are not limited to, the following:

  • Updating reporting procedures by making sure data is organized and readily available
  • Understanding and staying current on compliance requirements such as SOC, ISO, and HITRUST
  • Identifying potential user groups within the organization and scenarios to test new technology and AI tools


Assessing and Preparing for Risks

While readying themselves for evolving regulations and security considerations, organizations should also take steps to account for the current risk landscape. Data is the backbone of AI and emerging technology, and also remains a valuable commodity for cybercriminals and other bad actors. The ongoing challenge enterprises face is how to make data accessible to the stakeholders and systems that need it while maintaining effective security controls.

This starts with leveraging a leading control framework such as NIST CSF, ISO 27001, or NIST 800-53 to help understand and evaluate where and how the company stores data. A secure, cloud-based environment allows critical systems and personnel to use data as they need it while offering configurable security options. It also keeps data from being stored in silos like disconnected desktops and hard drives. But even the most secure systems still have a potential weak point: people.

The business must put controls in place so the right people and software can access data at the appropriate level. Most organizations understand control principles well, with a basic tenet of “zero trust,” meaning people and software should have access only to the information they need. When technology like AI is introduced, however, the matter becomes more complicated.
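
To make the least-privilege principle concrete, here is a minimal sketch in Python, assuming hypothetical role and dataset names, of how “access only to the information they need” might be expressed as a deny-by-default allow-list that applies to people and software identities alike.

```python
# Minimal illustration of a least-privilege ("zero trust") access check.
# Role names, dataset names, and the Principal type are hypothetical examples,
# not a reference to any specific product or framework.

from dataclasses import dataclass

# Explicit allow-list: each role maps only to the datasets it genuinely needs.
ROLE_PERMISSIONS = {
    "finance_analyst": {"general_ledger", "ap_invoices"},
    "hr_coordinator": {"employee_directory"},
    "reporting_service": {"general_ledger"},  # software identities are scoped too
}

@dataclass
class Principal:
    name: str
    role: str

def can_access(principal: Principal, dataset: str) -> bool:
    """Deny by default; allow only if the principal's role is explicitly granted."""
    return dataset in ROLE_PERMISSIONS.get(principal.role, set())

if __name__ == "__main__":
    svc = Principal(name="nightly-report-job", role="reporting_service")
    print(can_access(svc, "general_ledger"))      # True: explicitly granted
    print(can_access(svc, "employee_directory"))  # False: not in the allow-list
```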

AI is only as good as the data it works with, making it imperative to have parameters in place to help ensure data is high quality and relevant to the function it needs to perform. If the technology begins parsing irrelevant or inaccurate information, it can produce improper results. In a worst-case scenario, those results could make their way into reports used for decision-making and compliance, leading to undesirable outcomes.
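
As one illustration of what such parameters might look like in practice, the sketch below, written in Python with illustrative field names and thresholds that are assumptions rather than prescriptions, filters records through a few basic quality checks before they reach any downstream AI process.

```python
# A minimal, hypothetical data-quality gate applied before records reach an AI tool.
# Field names ("amount", "posted_date") and the one-year threshold are illustrative.

from datetime import date, timedelta

REQUIRED_FIELDS = {"record_id", "amount", "posted_date"}
MAX_AGE = timedelta(days=365)  # treat records older than a year as irrelevant here

def is_high_quality(record: dict) -> bool:
    """Return True only if the record passes basic completeness, accuracy, and recency checks."""
    if not REQUIRED_FIELDS.issubset(record):
        return False  # incomplete records are excluded
    if not isinstance(record["amount"], (int, float)):
        return False  # obviously malformed values are excluded
    if date.today() - record["posted_date"] > MAX_AGE:
        return False  # stale data is excluded as irrelevant
    return True

def prepare_for_model(records: list[dict]) -> list[dict]:
    """Keep only records that meet the quality bar before any AI processing."""
    return [r for r in records if is_high_quality(r)]

if __name__ == "__main__":
    sample = [
        {"record_id": 1, "amount": 120.50, "posted_date": date.today()},
        {"record_id": 2, "amount": "n/a", "posted_date": date.today()},  # rejected
    ]
    print(len(prepare_for_model(sample)))  # prints 1
```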

Organizations also need to consider third-party applications and users. When working with external companies and software, those parties will likely need access to certain data and systems to function properly. As with controls for internal processes, the business should confirm third parties have access only to the information they need.
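
The same allow-list idea can be extended to third parties. The hypothetical sketch below, using made-up vendor and scope names, grants an external integration only the data scopes the business has approved for it and flags anything requested beyond that.

```python
# Hypothetical check that a third-party integration receives only approved data scopes.
# Vendor and scope names are illustrative and not tied to any real product.

APPROVED_THIRD_PARTY_SCOPES = {
    "payroll_vendor": {"employee_directory", "timesheets"},
    "analytics_tool": {"sales_summary"},
}

def grant_scopes(vendor: str, requested: set[str]) -> set[str]:
    """Grant only the overlap of requested and approved scopes; flag any overreach."""
    approved = APPROVED_THIRD_PARTY_SCOPES.get(vendor, set())
    overreach = requested - approved
    if overreach:
        # In practice this would be logged and reviewed, not just printed.
        print(f"{vendor} requested unapproved scopes: {sorted(overreach)}")
    return requested & approved

if __name__ == "__main__":
    granted = grant_scopes("analytics_tool", {"sales_summary", "general_ledger"})
    print(granted)  # {'sales_summary'}: the unapproved scope is not granted
```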


Leveraging Existing Processes and Technology

Even with the proper groundwork laid and security risks considered, there can still be growing pains when adopting new technology. While some software and AI may integrate with existing processes and procedures, organizations should still plan to train stakeholders and users throughout the implementation process. To help streamline the adoption of new software, companies may be able to adapt the existing policies and training methods they use for other purposes.

For instance, organizations with robust bring-your-own-device (BYOD) and remote-work policies are likely to already have established data-management and control frameworks and requirements. Many, if not all, of those same policies are applicable to AI data use. Conversely, if a control framework around secure data handling is not in place, a company will need to develop and roll out newly created training to its employees and other stakeholders.

Systems integration also plays a key role with any new software. Comprehensive data governance is essential when working with data-driven technology, but that alone doesn’t mean new applications will work out of the proverbial box. From a practical standpoint, IT teams and users still need to test integrations to confirm legacy systems are compatible with added software solutions. Doing so can help surface inefficiencies and potential security issues that may not have been evident at the outset.
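
As a simple illustration of that kind of compatibility testing, the hypothetical Python test below checks that records exported from a legacy system still contain the fields a new tool expects; the function and field names are placeholders for the example.

```python
# Hypothetical compatibility test between a legacy export and a new tool's input contract.
# legacy_export() and EXPECTED_FIELDS are placeholders, not real system interfaces.

import unittest

EXPECTED_FIELDS = {"record_id", "amount", "currency"}

def legacy_export() -> list[dict]:
    """Stand-in for pulling a sample batch of records from the legacy system."""
    return [{"record_id": 1, "amount": 99.0, "currency": "USD"}]

class LegacyCompatibilityTest(unittest.TestCase):
    def test_legacy_records_match_new_tool_contract(self):
        for record in legacy_export():
            missing = EXPECTED_FIELDS - record.keys()
            self.assertFalse(missing, f"legacy record missing fields: {missing}")

if __name__ == "__main__":
    unittest.main()
```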


Preparing for the Unknown

By its very nature, emerging technology comes with undiscovered potential. It also comes with unknown threats, and the unfortunate reality is that in the current risk landscape, cybersecurity incidents are a matter of “when” and not “if.” Even as novel cyber threats appear more frequently, organizations can still prepare themselves for what to do in the event of a breach.

Developing and testing a comprehensive, proactive response plan can assist in organizing some of the chaos in the immediate aftermath of an incident. The plan should identify which individuals and teams are responsible for different aspects of the response and the steps they need to follow. It should also give guidance regarding clear communication about the incident, both internally and externally.


Planning for the Future

Emerging technology and AI provide plenty of reasons for organizations to be optimistic and hopeful about the benefits they offer. At the same time, in the midst of their enthusiasm and desire to adopt these new applications, it’s important to remember that companies don’t need to — and shouldn’t — take an all-or-nothing approach. Emerging technology and AI should be introduced gradually and in small-scale projects before being fully integrated into larger, enterprise-wide processes.

By embracing their existing operating procedures and looking for ways new tech can enhance them, organizations can benefit from novel solutions while remaining alert to unexpected pitfalls.
