I’ll Co-Pilot That

Yesterday I witnessed my first behavioural example of how AI is transforming the way we work.
In a conversation to fact-check some information, a colleague told me they would “co-pilot that” rather than physically browse or use the corporate search.
I have been involved in various virtual agent projects where users are directed to engage with chatbots and similar functionality, but here was an example of someone naturally reaching for Co-Pilot (other chatbots are available) rather than traditional methods.
It got me thinking that an internal communications or people director’s new best friend needs to be the people who are beginning to write the internal code and programming for these language models.
I’ve always been amazed at the lack of interest senior leaders show in the entire process of searching for and sourcing information. The old hierarchy seemed to suffice for trickling down information, but I wonder if this will hold true in the future.
With so much information and data available, it’s getting harder to nail down the truth. Our defence against misinformation is steadily weakening. After a generation of social media, we have emerged with less resilience against deceit and untrustworthy information.
Way back in the early days of Yammer (and other internal social media platforms), organisations were caught on the hop by the power and influence of social media, both internally and externally. Big technology leaps have a massive effect on the information supply, and the development of AI within organisations is no different. Those responsible for determining the organisation’s algorithm hold great power.
Who decides who writes and validates this? Whoever holds the pen controls access to an organisation’s corporate voice and memory.
For me it is fascinating not only how we deal with this from a technology perspective, through strategy, governance and implementation, but also how we deal with the new behaviours it develops.
By its nature, the programming behind these features will try to understand your goals, needs, beliefs and so on (subject to various regulations). The current chatbots mainly use the language of the calm oracle, patient and understanding. Cast forward to the future: are we going to see these virtual agents develop personalities based on your temperament? Will they understand your resilience or trusting nature? And how do our behaviours then change to deal with the personality types of the chatbots we are developing? It presents a fundamental difference to the way we approach behaviours in an organisation.
