As we continue the end-of-year review of all things tech, two key topics spring to mind: digital ethics and the progress of AI in people-related technologies. "People tech" covers HR, recruitment and similar technology that enables businesses to hire, manage and plan for their most important asset: people. With new suppliers emerging constantly, it is very difficult for businesses to judge which technology is good, i.e. ethical in its handling of data, code and algorithms, and which is not.
The first thing to clarify is that "AI" (artificial intelligence) has become a catch-all label for most people tech these days. The term is abused more often than it should be, which confuses buyers who simply may not have the time to keep on top of the technology or to research it before purchasing, and that confusion typically costs them significant resources. To clarify: AI has several strands, machine learning being one and automation another. These two are by far the most widely used in people tech at the moment, while other strands of AI are currently more relevant in other sectors; autonomous cars, for example, draw on robotics and other strands of AI.
Regardless of which strand of AI is used, and especially at the algorithm-building stage, it is extremely important for every developer and tech business not just to think about "ethics" and "bias" but to put practices in place that help them tackle these challenges, for themselves, their employees and their users. This is what truly allows them to build a purpose-driven, value-adding commercial product. Increasingly, experts are talking about this issue: from the TechUK committees I sit on to the global IEEE guidelines work I am part of, many individuals and organisations are discussing it constantly.
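As one illustration of what "putting practices in place" could look like in code, here is a minimal sketch of the four-fifths-rule heuristic sometimes used to screen selection outcomes for adverse impact across groups. This is my own illustrative example with hypothetical data and function names, not a method prescribed by any of the bodies mentioned above, and a real review would go well beyond a single metric.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Compute the selection rate per group from (group, selected) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in outcomes:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag whether each group's selection rate is at least `threshold`
    times the best-performing group's rate (the four-fifths heuristic)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best) >= threshold for g, r in rates.items()}

# Hypothetical screening outcomes: group label and whether the
# candidate was shortlisted by the algorithm under review.
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
print(four_fifths_check(outcomes))  # group B falls below the threshold
```

A check like this is cheap to run on every model release; the point is less the specific metric than making the review a routine, recorded step rather than an afterthought.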
However, very little has been seen in terms of action, so for my part I am already practising what I preach in my own technology. We are a startup, and reviewing the code for new features through this lens does add a couple of extra hours to my time, but it is very satisfying to know that it comes from a place of supporting users. We also use and manage data carefully: we strictly use only the data that helps our users, through the analytics our platform offers and through a better experience. But how can larger tech companies and software houses implement this?
In fact, I believe that the larger the business, the easier it should be to have a process, and a person, that understands the desired outputs of the business vision and how it supports customers, while acting as an in-house ethics and bias reviewer. This gives businesses a great deal of internal power to follow the guidelines drawn up by governments and other organisations actively supporting framework building in this area. I know 2019 will be a key year for growth in digitisation, automation, augmented analytics and blockchain. So I really hope that businesses stop merely talking about the fundamental challenges of digital and AI ethics and start building tools and frameworks to monitor them.
Bio of the author: Bhumika Zhaveri is an unconventional, solutions-driven technology entrepreneur and businesswoman. She is an experienced HR technologist with expertise in HR and recruitment technology and in programme management for change and transformation, and her versatile life, personal and professional experiences allow her to look at challenges differently than most. She is actively involved with TechUK and IEEE committees on data ethics, AI and digital, the TechSheCan charter with PWC, Girls Who Code and similar women-in-STEM organisations. She is also currently the tech advisor for Resume Foundation and Bridge of Hope, and a founding member of Digital Anthropology.