Just Too Many Digital Chiefs

Much as medicine has splintered into specialists providing in-depth, expert care in specific areas, the tech industry has seen a similar shake-up in recent times, resulting in a plethora of high-sounding titles such as Chief Analytics Officer (CAO), Chief Artificial Intelligence Officer (CAIO), Chief Data Officer (CDO), Chief Digital Transformation Officer (CDTO), Chief Information Officer (CIO), Chief Information Security Officer (CISO), Chief Knowledge Officer (CKO), Chief Machine Learning Officer (CMLO), and Chief Technology Officer (CTO). The trend is ongoing, as evidenced by the myriad executive programs offered by Ivy League colleges and training schools for those keen to qualify.

Rapid technological advancement has caught many enterprises off guard. The surge of chief titles like CAIO and CMLO appears to be a knee-jerk reaction to the phenomenal growth of generative AI. In the past few years, many CISO appointments were fast-tracked to comply with regulatory mandates in some parts of the world requiring a dedicated chief for cybersecurity amid escalating cyber breaches and privacy invasions. On the other hand, the once in-demand CKO of the late 1990s is fast fading, likely ousted by the CDO and CAO amid a shifting focus to big data and analytics. Lastly, the de facto tech chief, the CIO, has seen its technology portfolio largely taken over by the CTO, ostensibly to allow a dedicated focus on technology.

Obviously, we do not need a management professor to tell us that too many chiefs without a chief of the chiefs would be a grave mistake in corporate governance. For instance, should the CISO be accountable for the security of an AI system? Intuitively, yes, provided the CISO has veto power over the AI because accountability requires control. From frivolous data to business insights and invaluable knowledge, should the CKO be rejuvenated and made responsible for all these seemingly discrete domains, thus offloading responsibilities from and right-sizing the CIO and CDO? Ironically, does the CDTO really fit the bill of a digital chief with goals to transform business? Realistically, must all the chiefs bear the same titles and compensations if their job sizes differ?

Nobody would argue if the Chief Executive Officer (CEO) were to be the overall digital chief, given how tech has been transforming industries and businesses. Sitting a level closer to the head of the organization allows for more direct communication, peer-level brainstorming, and faster decision-making. However, this is impractical given the CEO's day-to-day management chores. For non-tech, non-profit, and end-user enterprises, IT is mostly a tool rather than a strategy, an expense rather than an investment, and it hardly creeps into the KPIs (Key Performance Indicators) of the CEO. Also, it takes more than a tech-savvy CEO to oversee the work among the digital chiefs, dealing with operational issues and personnel conflicts.

It is an opportune time to rehash the chiefs’ departments if you have close to double digits of digital chiefs, especially when some have no direct reports. The CIO title debuted around 1980 and the CTO around 1990, by which time the first batch of CIOs had already been functioning well for a decade before relinquishing their tech function to the CTO. The CIO nomenclature has suffered from a birth defect with a missing specific – Technology – despite it being a substantial part of the role. Given the continuous advancement of and escalating reliance on technology, it makes perfect sense for a new chief function, the Chief Information Technology Officer (CITO), to take on both portfolios. In fact, the CITO role has emerged in recent years as a response to the increasing importance of technology in organizations, likely evolving from the CIO and CTO roles.

Some CISOs report to an independent entity, such as the Board, the CEO, or a corporate chief of risk management, citing the need for autonomy without being undermined by the CIO or any other chief. Unlike audit, however, the CISO is not an inspect-and-control function; it is the inherent cybersecurity knowledge and skills that are most valued. The CISO should be an integral part of the CIO department, incorporating security design and operating requirements into any tech development. The CISO should also be the party to endorse tech implementations and operational changes. Checks and balances can be achieved through independent audits, external consultancy, and certifications like the ISO 27001 Information Security Management System.

Data does not lie but stops short of saying anything if it is not clean. Like clean water to humans, pristine data is the lifeline of AI and of the CAIO, CDO, CAO, and CMLO, even though each takes a different spin on it. The CDO should define the relevant policies for data ownership, cleansing, protection, sharing, and retention, and govern and coordinate efforts among the business units to ensure compliance and resolve disputes. Separately, the CAO focuses on data analytics, using tools like Excel, Python, SQL, and SPSS to justify business actions and decisions and subsequently measure performance. Raw data is akin to unrefined ore; it is abundant and contains potential value, but in its unprocessed state, it lacks clarity and insight. Combining the CDO and CAO functions into a Chief Data and Analytics Officer (CDAO) provides oversight and management controls for transforming raw data into valuable insights.

The CMLO, equipped with strong mathematics, statistics, and coding knowledge, builds algorithmic models for applications such as generative AI, behavior analysis, and pattern recognition. The CAIO, with a similar background, spearheads AI direction, strategies, ethical use, and staff training across the entire enterprise. It is an ecosystem where the chiefs interact and work to embed AI seamlessly in all business functions.

In the context of the CDTO, the latest kid on the block, Tech and Digital are not interchangeable. As the name implies, digital transformation aims to modernize the business by leveraging progressive tech advancements. Transformation is disruptive, often requiring mindset changes, new learning, and critical thinking to debureaucratize the organization. Besides possessing the necessary business acumen, the CDTO needs a clear mandate and the authority to make decisions in order to address and overcome objections effectively. The emergence of the CDTO is timely, fueled by attainable technologies such as Cloud, RPA (Robotic Process Automation), and next-generation ERP (Enterprise Resource Planning), and by the prevalence of BPO (Business Process Outsourcing), which together enable businesses to own their transformation.

Except for the CDTO, all tech chiefs have either a share of operational duties or a high stake in them. In a unified approach, tech-related activities such as strategic planning, manpower forecasting, and budgeting should be integrated and coordinated across the enterprise, rather than being siloed among separate digital chiefs. This collaborative approach ensures alignment, efficiency, and effective resource allocation, enabling the organization to achieve its goals and business priorities cohesively and strategically. As the saying goes, “A house divided against itself cannot stand.” By working together, we can build a strong and resilient organization that thrives in today’s fast-paced and competitive landscape.

Merging the CIO and CTO functions into CITO and combining the CDO and CAO into CDAO are pivotal steps prior to integrating the CAIO, CMLO, and CISO functions into the same CITO office. Partnership hinges on individuals, but an integrated system, once built, will be long-lasting regardless of personnel changes and how technology evolves. Transformation is not a transient function, and the CDTO, primarily a business function, should stay abreast of technological changes and continue to lead the effort.

With the optimized hierarchy, the CITO, combining the functions of the CIO, CTO, CAIO, CMLO, and CISO, will report to the CEO or their deputy, as will the CDTO and the CDAO, which combines the functions of the CDO and CAO. Knowledge will be generated on the fly, with proper safeguards, as generative AI becomes more intelligent and widespread, further diminishing the CKO’s role.

Organizational changes are risky. Dealing with potentially inflated titles, re-designations, and job resizing may unsettle many incumbents. It is reminiscent of those heated debates over centralizing versus decentralizing tech functions in a large enterprise. Ultimately, organizations that persevere through these changes will benefit from agility, cost savings, clarity of ownership, accountability, less politicking, and a healthier workplace, finally emerging as leaders in their industry.



*Copyedited by ChatGPT, https://chat.openai.com/chat

IT Helpdesk Who Needs Help

Once, users commented: “It is the Helpdesk who needs help, not us.” Out of frustration? Maybe. But it certainly served as a wake-up call at a time when technology was already an integral part of every enterprise function, and yet support services could not live up to expectations. The sentiment resonated with my own experience of below-par customer service in various verticals, suggesting that IT helpdesks in most enterprises are perceived as peripheral and non-strategic. Can we turn this around, and how?

Begin With The Technology Leader

In my previous organization, the IT Helpdesk provided a single point of contact for problem reports, general inquiries, complaints, requests for resources, and more. With a captive user base of 38,000, the majority being digital natives or self-proclaimed IT literates who could literally argue with any advice we offered, the demand for support was high. Dealing with two major Enterprise Resource Planning (ERP) suites, multiple coding platforms, hundreds of Cloud and bespoke applications, over 2,000 wireless access points, 120,000 end-points, servers, network devices, and an average of 8,000 user tickets monthly, the job was demanding and thankless. Annual surveys showed little to be proud of, and staff morale was low. In the mid-2000s, an outsourcing trend emerged, appealing for its promised cost savings and improved service levels. I was skeptical but dived in, since expanding in-house staffing would have been treated as a fixed cost.

A technology leader, understandably overwhelmed by business politics and digital intricacies, can only afford direct oversight of a few strategic functions like business relationships, applications, infrastructure, and cybersecurity, not the helpdesk, despite its significant impact on user experience. This disconnect makes it hard for the technology leader to intervene in disastrous situations. Intervening on the basis of filtered reports alone, without first-hand insight, merely prolongs systemic issues and prevents them from being resolved at the root cause.

Enterprise IT tends to be highly compartmentalized by function, with each function led by its own head. The helpdesk, to be frank, is not glamorous. Commanding the least respect in the enterprise hierarchy, and with no authority over the priorities of the engineers and functional heads responsible for the fixes users desperately need, the helpdesk ends up fuelling even more user frustration.

Next, in terms of workforce, morale, and commitment, we cannot expect these personnel challenges in the helpdesk to diminish with outsourcing. In the unfortunate event of being stuck with a slacking third party, you need the technology chief to pull their weight for prompt remediation. It is essential for the chief to commit undivided attention to helpdesk operations, cultivating a strategic relationship with the service provider, regularly reviewing service levels, and keeping a pulse on the ground for concerns and expectations, so that improvement is sustainable and lasting.

Obsess to Serve and Services

The Helpdesk is a people business, where the users’ experience heavily impacts the perceived performance of the entire IT organization. While technical competency is essential, what differentiates an exceptional helpdesk from a mediocre one is a deep sense of urgency, empathy, passion to serve, and committed leadership. These factors go a long way toward user satisfaction, even when practical resolutions are not immediately possible at times.

Organizations that are truly obsessed with customer service not only act upon users’ feedback but proactively seek service enhancement. Here are some practices:

1. First-Call Resolution

Many of us have had the poor experience of making repeated calls to the helpdesk for the same issue when the advice given initially does not work. First-Call Resolution is the rate of problems resolved at the first contact. It is a sensible performance target for customer service, judging the accuracy of advice and attention to detail. It is also an indicator worth watching closely for the growing technical maturity of the helpdesk, since reported issues can range from technically trivial to complex and sophisticated. The higher the rate, the more technically capable the helpdesk is.

2. Minimal Referrals

An enterprise helpdesk has to deal with a vast amount and variety of problems and user enquiries daily. Certainly not all issues can be addressed by the agents at the front desk, and after exercising due care, some cases may be referred to the backend engineers for advice. However, an effective helpdesk should function as a cushion for the engineers. Excessive referrals without proper diagnostics can be a sign of incompetence or negligence on the part of the helpdesk.

3. Call Listening

Despite the well-intended note that “your call will be recorded for service improvement purposes,” users are more concerned about immediate resolution than future service improvements. Given that the first impression is likely a lasting one, every effort should be made to ensure a delightful user experience on the first call. Implementing call listening allows supervisors to join calls selectively, monitoring conversations in real time and intervening when necessary with advice and immediate solutions.

4. Personal Touch

Despite the growing intelligence of AI-powered chatbots, nothing beats a personal touch with attentive and empathetic listening for greater user satisfaction. Agents can identify themselves before interacting with the caller, leave a contact for a return call, verify their understanding of the reported issue, share the possible causes behind the recommended actions, and offer a personalized experience.

5. Mystery Self-Audit

Many enterprise IT teams have experienced audit fatigue, and yet another self-inflicted audit would certainly push them to the verge of burnout. Unlike most scheduled audits, a mystery audit is impromptu and specific, conducted without prior notice to the helpdesk. It involves trained users appearing unexpectedly to assess the helpdesk’s listening, communication, and problem-solving skills, and its ability to manage difficult users and unreasonable demands. Among many other systemic issues, it helps flag errant agents for specific coaching. A mystery audit is lightweight and practical because it avoids energy-sapping tasks like documentation and board reports.

6. Self-Experiencing

Providing tech specialists an opportunity to serve as user-support agents allows them to experience digital services as users do, enhancing the design and friendliness of products. It happens all too often that missing details and miscommunication between the helpdesk and the tech specialists cause unnecessary delays to fixes. Moreover, without direct interaction with users, it is hard for tech specialists to appreciate the poor experiences users may have with digital services.

In a highly competitive services sector, the players that emerge and stay at the top will have differentiated themselves from the pack through support services, not products alone.

*Copyedited by ChatGPT, https://chat.openai.com/chat

Capacity-on-Demand (Part 2 of 2)

It’s important to acknowledge that not all servers in a data centre run at full capacity. Peak loads are often short-lived, rarely making use of the extra resources some system administrators provision to preempt server crashes during load spikes. What if we could harness and repurpose 20% of such idle capacity from a 1,000-server farm while enhancing service levels and adding value?

Server Virtualization

Many daily activities in a data centre involve moving (servers), adding (servers), and changing (server configurations), commonly known as MAC (Move, Add, Change) operations. These seemingly routine tasks become increasingly prevalent and complex in many large enterprises with a growing array of operating systems, databases, web and application services, and geographically dispersed data centres.

From hardware setup to software configuration, virtualization slices physical hardware into multiple programmable servers, each with its own CPU, memory, and I/O. Strictly speaking, once automated, software work incurs no marginal labour cost, allowing MAC activities to scale swiftly, cost-effectively, and with precise accuracy and no boundaries.
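
As a minimal, hypothetical sketch of what such automated “Add” and “Change” operations can look like, the snippet below uses the open-source libvirt Python bindings against a local KVM host. The host URI, VM name, and resource sizes are illustrative assumptions only; a production definition would also include disks and network interfaces.

```python
# Hypothetical sketch of an automated "Add" and "Change" MAC operation via libvirt.
# The host URI, VM name, and resource sizes are illustrative assumptions.
import libvirt

VM_XML = """
<domain type='kvm'>
  <name>app-server-01</name>
  <memory unit='MiB'>8192</memory>
  <vcpu>4</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open("qemu:///system")          # connect to the hypervisor
dom = conn.defineXML(VM_XML)                   # "Add": register the new virtual server
dom.create()                                   # power it on

# "Change": trim the persistent configuration to 2 vCPUs and 4 GiB of memory
dom.setVcpusFlags(2, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
dom.setMemoryFlags(4 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_CONFIG)  # value in KiB

conn.close()
```

The same few lines, wrapped in an orchestration pipeline, can be repeated across hundreds of servers without the per-unit labour of physical moves, adds, and changes.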

Virtualization underpins a significant shift in data centre operations:

Firstly, we no longer need to oversize servers, knowing that CPU, memory, and storage resources can be dynamically adjusted. This, however, doesn’t diminish the importance of proper capacity sizing, but it eliminates the psychological “more is better” effect.

Secondly, we no longer need to panic when a server suffers from the infamous “crash of unknown cause.” A hot or cold standby server, utilizing harvested resources, can quickly minimize user impact.

Thirdly, cloning a server becomes effortless, especially when enforcing the same security settings across all servers, minimizing human oversights.

Fourthly, it serves as a kill switch during a suspicious cyberattack by taking a snapshot of the server and its memory map for forensic purposes before shutting it down to contain the exposure.
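
A hedged sketch of the “kill switch” idea, again using the libvirt Python bindings: pause the suspect virtual server, dump its memory for forensic analysis, then power it off to contain the exposure. The VM name and dump path are illustrative assumptions.

```python
# Hypothetical containment sequence for a suspect virtual server.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("app-server-01")

dom.suspend()                                    # freeze the suspect workload
dom.coreDump("/forensics/app-server-01.mem",     # capture the memory image for analysis
             libvirt.VIR_DUMP_MEMORY_ONLY)
dom.destroy()                                    # hard power-off to contain the exposure

conn.close()
```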

Workstation Enablement

High-end workstations are typically reserved for power users’ desktops for tasks involving large datasets, such as data modelling, analytics, simulation, and gaming. Thanks to significant advancements in chip technology, virtualization has gained substantial traction in high-performance computing (HPC). This allows more desktop users to have workstation capabilities and provides ready-to-use specialized HPC software, such as MATLAB, SPSS, and AutoCAD, maintained centrally without the hassle of per-unit installation. Both CPU- and GPU-intensive workloads are processed in the data centre, with screen changes, for example, transmitted back to the user on a lightweight desktop computer. Achieving decent performance largely depends on sufficient desktop bandwidth, with a minimum of 1 Gbit/s in my experience, assuming the enterprise has ample bandwidth within the data centre.

Network Virtualization

Computer networking primarily involves switching and routing data packets from source to destination. It seems simple, except when addressing MAC activities such as firewalling a group of servers at dispersed locations for a business unit dealing with sensitive data or filtering malicious traffic among desktops. The proliferation of IoT devices and surveillance cameras with delayed security patches only exacerbates the situation.

By creating logical boundaries at layer two for data switching or layer three for data routing among the servers in the data centre, users’ desktops, or specialized devices, one can easily insert either a physical or software-based firewall into the data path to protect workloads.
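
As an illustrative sketch of the layer-two idea on a plain Linux host, the snippet below carves a VLAN off a physical interface and attaches it to a bridge, so that workloads in this segment can be firewalled as a group. The interface names and VLAN ID are assumptions; commercial SDN platforms expose the same concept through their own APIs.

```python
# Illustrative layer-two segmentation on a Linux host using iproute2 commands.
# Interface names and the VLAN ID are assumptions for illustration.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["ip", "link", "add", "link", "eth0", "name", "eth0.100", "type", "vlan", "id", "100"])
run(["ip", "link", "add", "name", "br-sensitive", "type", "bridge"])
run(["ip", "link", "set", "eth0.100", "master", "br-sensitive"])
run(["ip", "link", "set", "eth0.100", "up"])
run(["ip", "link", "set", "br-sensitive", "up"])
```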

Crucial Requirement

While both the Cloud and Virtualization offer similar capabilities in agility within modern IT, the staff’s expertise in network and system architecture remains the most crucial requirement for the successful implementation and realization of the benefits. It is timely for enterprises to incorporate Generative AI into their technology workforce, allowing them to learn and grow together, promoting knowledge retention and transfer.

Capacity-on-Demand (Part 1 of 2)

Digital agility is of utmost importance in modern business, encompassing speed and responsiveness. For instance, a rapid turnaround to address an infrastructure bottleneck, a quick resolution to erroneous code, a prompt diagnosis of user-reported issues, or an immediate response to contain a cyberattack would undoubtedly be appealing. Nonetheless, achieving agility in a large enterprise is no easy task, and these efforts can be hampered by a risk-averse corporate culture, untimely policies, and gaps in staff competency.

I define Capacity-on-Demand as an organization’s ability to scale up digital capacity, specifically focusing on infrastructure capacity in this post, as and when it is required. A highly versatile, high-performing, and secure infrastructure is a crucial asset for any enterprise, with strict uptime and performance requirements often committed as service levels to their business partners by Enterprise IT.

However, such commitments hold up well only when the operating environment remains unchanged. As usage increases, businesses modernize, technologies become obsolete, and maintenance costs for aging equipment escalate, many enterprise technology chiefs face the due diligence of upgrading their infrastructure roughly once every five years to keep up with user demand and application workloads.

But what alternatives exist when this upgrade entails intensive capital outlay for a system likely to be useful for only 60 months? Even with the blessing of new investment, the epic effort to commission the major upgrade, including technical design, prototyping, specifications, installation, and other administrative overheads, may amount to a woeful 18 months or more. The Return on Investment (ROI) in such a scenario is utterly inefficient!
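
One way to read those numbers, as a back-of-envelope illustration only: if the 18-month commissioning effort eats into the roughly 60-month useful life quoted above, a large slice of the capital outlay earns nothing. The figures below are purely illustrative.

```python
# Back-of-envelope illustration of the ROI drag, assuming commissioning time
# eats into the useful life. Figures are illustrative, not a general rule.
useful_life_months = 60
commissioning_months = 18

productive_months = useful_life_months - commissioning_months
share = productive_months / useful_life_months
print(f"Productive life: {productive_months} months ({share:.0%} of the nominal five years)")
# Productive life: 42 months (70% of the nominal five years)
```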

Cloud Storage

From mirror copies to backup and archival copies of enterprise data, meeting operational and legal requirements necessitates provisioning nearly triple the storage capacity for every unit increase in data volume. In a large enterprise, the total can amount to tens of petabytes or more. Dealing with such large-scale and unpredictable demands often leads us to consider Cloud storage. It offers elasticity and helps reduce the data centre footprint. However, it also assumes no legal implications for data residency, and the organization must be willing to accept less desirable contract terms on service levels, data privacy, liability, indemnity, security safeguards, and exit clauses.
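
A simple illustration of that near-threefold provisioning rule: every terabyte of new primary data also needs mirror, backup, and archival copies. The annual growth figure below is an assumption for illustration only.

```python
# Illustration of the roughly 3x provisioning rule described above.
new_primary_tb = 500          # assumed growth in primary data per year, in TB
provisioning_factor = 3       # primary plus mirror plus backup/archival copies

total_tb = new_primary_tb * provisioning_factor
print(f"Capacity to provision: about {total_tb} TB for {new_primary_tb} TB of new primary data")
# Capacity to provision: about 1500 TB for 500 TB of new primary data
```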

Storage Leasing

Storage leasing presents a viable alternative if you possess the economy of scale, a mid- to long-term horizon, and a fairly accurate but non-committal year-by-year growth prediction during the contract period. These considerations are crucial for a cost-effective proposal.

Similar to Cloud storage, storage leasing helps alleviate capital constraints and smooths out lumpy expenses in the budget plan over the years, an approach preferred by some finance chiefs. Additionally, you have the option to choose between a finance lease with asset ownership and an operating lease that saves the tedious effort of asset keeping.

Sporadic Demands

Despite the forecasted storage growth rate, addressing urgent demands at short notice necessitates pre-provisioning spare capacity onsite without activating it. I used to include such requirements in the leasing contract at a fraction of the total cost, with the option to turn the capacity on and off as needed or to normalize it as part of the forecasted growth; the latter approach prevailed in my previous environment.

Access Speed

Does the access speed to the Cloud differ from onsite storage? It is a rather complex assessment. Apart from factors like drive technology, data transfer protocols, and cache size, onsite storage in an end-user environment, where users and employees are mostly located within the enterprise, will provide a better user experience since the speed is not limited by Internet bandwidth. We should also consider the nature of the data, which nowadays is predominantly machine-generated, such as transaction logs, user access records, and security events. These voluminous, real-time data are latency-sensitive and consume much of the Internet bandwidth, making it advisable to locate their sources as close to the storage as possible.

Storage Operations

Equipping the workforce with the necessary expertise and knowledge of proprietary tools to manage and operate Cloud or onsite storage is crucial. Cloud storage offers ease of provisioning and management, including storage provisioning, backup and recovery, and site redundancy. However, I am hesitant about operating a black box in a heterogeneous environment without understanding its internal dynamics and without a plan for skill transfer. Storage is a significant component of the entire enterprise technology stack, and highly committed, collaborative efforts from the storage provider are essential for planning and successfully executing drills and post-reviews and for avoiding the “not my problem” syndrome.

Onsite storage will entail more technical management overheads compared to the Cloud. One can include the required expertise and make provisions in the contract to support the adopted solution. The service provider, backed by the principal, will have the most experienced personnel to support your organization. Once again, we must not overlook the importance of having a plan for skill transfer.

Mutual Trust

Technology leasing is not a novel concept. The key is to customize a contract to bridge the gap left by the Cloud. The initial journey may encounter challenges, but with shared goals and mutual trust, it can lead to a long-term win-win partnership. Throughout my experience, I have utilized both Cloud and onsite storage, ranging from file storage to block and object storage, and transitioning from SCSI to Fiber Channel connectivity and finally to all-flash drives, to meet my needs. At the end of each contract, there was a comprehensive review of the overall service performance and upcoming technologies, resulting in reduced data centre space and energy footprint, as well as lower per terabyte cost for the next phase of development. This approach also provides the right opportunity to give a new lease of life to the storage infrastructure.

Next Post

On-demand provisioning is far from complete without the agile provisioning of server and network capacity, which I will cover in the next post.

*Post is copyedited by ChatGPT, https://chat.openai.com/chat

Let Us Define IT Quality

The consequences of technology failures, such as a system crash, a cyber breach, or sluggish app performance, can be devastating. They affect businesses, operations, customers, and users. They could even be a life-or-death matter in the event of a disrupted surgical operation or a breached IoT sensor in an autonomous vehicle.

In the incubating years of my management career, I struggled with the performance of IT. Not because we performed poorly as a team, but because our positive results did not necessarily resonate well with the business. Could it be a stereotype in the community I supported, an excuse from the responsible party, or indeed substandard IT work?

In today’s storm of digital transformation, the Business and IT are so tightly coupled that one’s performance depends on the other. For instance, an ill-formed data-driven workflow cannot benefit from automation alone if we fail to integrate the data sources, and incoherent data sets will frustrate customers who receive duplicated marketing materials regardless of system performance. In such situations, when both parties’ performance is at stake, the result can be many unpleasant arguments, fault-finding, and finger-pointing in the project room.

Performance is not measurable without defined indicators, commonly known as Key Performance Indicators (KPIs) in many organizations. Specific to technology, which cuts across a variety of business functions, the defined KPIs should align with the business goals and thus encourage co-ownership. They should also appeal to watchful stakeholders, like the funding and risk management entities across the enterprise.

The Eight Performance Indicators

From a business perspective, a high-quality system shall deliver accuracy, performance, security, and stability, and be user-friendly.

Accuracy – this refers to the precision of the constructed system in meeting the specific business requirements and ensuring the important aspects of data integrity, authenticity, and correctness in data processing and the presentation of information. In my experience, data quality can make or break your project. It is also the hardest to deal with in an environment with dispersed data sets and multiple ownerships.

Performance – a high-performing system will provide good and consistent response times to support the designed workload as agreed between IT and the Business. Typically, this requires adequately sized system capacity, optimized database queries, and rigorous load-testing in various usage scenarios before releasing the system for general use.

Security – system security is of utmost importance. It concerns the defensive tools, best practices, controls, methodologies, and threat intelligence deployed to safeguard digital assets, personal data, and privacy. Any security events, real-time alerts, and triggers to the intended responders should be defined prior to system development. Besides the technical means, continuous training and user education are essential to mitigate the risk from humans, commonly regarded as the weakest link in cyber defence.

Stability – a stable system will have minimal unscheduled disruption, typically quantified by the uptime per year committed by IT in agreement with the Business (a short worked illustration follows this list of attributes). There are numerous mechanisms, like standby hardware, a secondary site, and dual transmission links, to ensure continuous operations should any failure happen to the primary resource. The extent of redundancy is often a trade-off between cost and business criticality, and again a joint decision of IT and the Business.

User-Friendliness – last but certainly not least is user-friendliness; a poor user interface with inconsistent layouts, misleading error messages, and cluttered clickable actions will simply annoy users. Web design is a professional skill distinct from IT, and some organizations have resorted to external help on design thinking to address the issue.
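
As the worked illustration promised under the Stability attribute, here is how an uptime commitment translates into allowable unscheduled downtime per year. The percentages are assumptions for illustration, not recommended service levels.

```python
# Illustration: converting an uptime commitment into allowed downtime per year.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def allowed_downtime_hours(uptime_pct):
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

for uptime in (99.0, 99.9, 99.99):
    print(f"{uptime}% uptime allows {allowed_downtime_hours(uptime):.2f} hours of downtime per year")
# 99.0% uptime allows 87.60 hours of downtime per year
# 99.9% uptime allows 8.76 hours of downtime per year
# 99.99% uptime allows 0.88 hours of downtime per year
```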

Since IT is ultimately concerned with the continuous operation of and changes to the system, there are additional quality attributes, like Scalability, Maintainability, and Supportability, to be cared for. Unless we make further efforts to formalize them across the enterprise, these KPIs are, unfortunately, less known to the business or perceived as lower in priority.

Scalability – this gauges IT’s ability to commission additional system resources to cater for a projected increase in workload within weeks, or sooner. It goes beyond the Cloud as the solution, as one may need to review the corresponding scale-up of in-house supporting services like the firewall, intrusion detection, load balancer, Internet bandwidth, transaction logging, and data backup capacity.

Maintainability – a hard-to-maintain system will be too rigid to adapt to, or unable to cope with, business changes without a major overhaul or huge investment. Addressing the issue requires a combination of supporting technologies, modular software design, and technical skills and experience.

Supportability – IT is a knowledge economy, and knowledge, skills, and expertise (KSEs) prevail for a high-quality system. When a coding error, configuration mistake, or operational oversight could result in dire consequences, do we ever ask whether we have the required KSEs and human capital to support and sustain the continued operation of a newly adopted technology?

In summary, IT performance is a collective effort of the organization. It must be defined from the perspectives of both IT and the Business, with the objectives below:

1. Bring clarity of digital performance, objective assessment, and harmony across the enterprise in the pursuit of business transformation.

2. Nurture a digitally literate community for further work on technology governance and strategies for achieving the performance goals.

3. Make clear to stakeholders the essential IT investments and priorities for overall organizational performance, rather than technology performance alone.

Enterprise & IT

Let me contextualize “Enterprise” and “Enterprise IT” for the subjects to be written about in this space. In my view, a large “Enterprise” tends to have tens to hundreds of thousands of digitally enabled employees and users, and a plethora of business functions, processes, and digital solutions and services. Striving hard to stay ahead of their peers, many of them have been investing aggressively in technologies for business growth, innovation, and service excellence. Such investments are often business-driven, time-sensitive, and cyclic in nature. Depending on the digital maturity of the organization, many unit-level business decisions on technology investments may not thoroughly consider the impact on the enterprise-wide infrastructure, software interoperability, or the availability of technical competency for continued operations. This sets out the greatest challenges and concerns for “Enterprise IT”, the central office responsible for the governance, planning, management, and operations of the entire IT landscape across the enterprise.

Besides specific thoughts to address the challenges above, you may wonder why many mothers and fathers I know working in IT do not recommend that their children follow in their footsteps. Why is there a stereotype that techies are poor communicators? What is the future-proof discipline in IT: mobile app development, network engineering, AI, or cybersecurity? What does it take to become a high-performing IT professional? Why do many technologies fail to gain a foothold in large enterprises? These are just a few of the subjects I have in mind to share in the future. Feel free to suggest.