
Now In Tech: AI, Assessments, And The Great Over-Correction

Sachin Gupta · May 19, 2023 · 3 min read

If you are craving some stability and have had enough adventures, you are definitely not alone. Over the last three years, I’ve had my fill of rollercoaster rides – metaphorical and literal. Seeing the post-COVID hiring frenzy descend into chaos has not been easy for me or for others in the recruiting community.

With tech companies laying off more employees in Q1 ’23 than in all of 2022, good news has been hard to come by. For my first update of 2023 – a new endeavor I decided to take up this year – I would have liked the industry to be in a much better place than it currently stands, but we’ll play the cards we’ve been dealt.

All of us in tech know that this downturn will reverse in a heartbeat as soon as the markets stabilize. What remains to be seen is whether it will have a lasting impact on how we hire engineering talent, similar to the move to remote hiring post-pandemic.

There is a bright-eyed, bushy-tailed focus on productivity among the Valley crowd. Gone is the massive workforce that came to a workplace boasting three-course meals in the cafeteria and needed yoga rooms and sleeping pods to function. In many ways, tech is going back to basics – show up, use your skills, and be productive.

Companies across the board are reviewing their team structures, looking for removable cushioning, cutting down on middle management, and trying to do more with less. Public companies have been motivated by their stock performance, and the rest are emboldened, seeing the giants lead the charge. Retrenchment has been the buzzword around the Valley. Tech leaders are revisiting their performance review processes and reformulating their hiring.

While some cutbacks were long overdue (for instance, Google more than doubled its workforce between 2018 and 2022), as an industry we may have been a little overzealous with this wave of austerity. Markets will bounce back, as they always do, and companies will go on a hiring spree to fuel the growth – but who we hire, and how we hire, will have changed.

Despite this recessionary period, the demand for specialized skills like data science, AI, and ML has remained stable. However, the hiring urgency we saw right after the pandemic has been replaced by a muted, prolonged hiring process where the focus is not on filling roles faster but on hiring for the right skill sets. Developers reading this should be aware that the Great Over-Correction in tech also sounds the death knell for the bargaining power they have held since COVID.

Talking of skills…

There are two important ongoing skill-related conversations in the tech world that have directly impacted HackerEarth. The first is the emergence of ChatGPT and other generative AI tools, and their use in candidate assessment tests.

I’m sure you’ve all seen the LinkedIn posts about how ChatGPT has been acing entrance tests and accumulating certificates. While the models are already very good at such structured tests, they will only get better with more data. Unless regulated, this may mean the end of the traditional fact- and information-based testing methodology. Testing as a concept will have to evolve and move completely away from measuring information retention and formula application toward real problem-solving.

Generative AI, while extremely powerful, is still poor at real problem-solving. These models can efficiently solve well-defined problems but are incapable (at least for now) of solving real-world problems. Tech assessments are no different, and there is a lot of concern around the use of ChatGPT in coding assessments in their current form, particularly because many companies have so far relied on complex algorithmic coding tasks as a measure of competency. That approach will be completely undermined by systems like ChatGPT.

One school of thought is that every software developer is going to use generative AI for coding in the near future, so it only makes sense to provide generative AI systems as part of assessments – not only to let candidates use them for problem-solving, but in some cases to test whether they know how to use a generative AI system effectively. However, there is an alternative opinion: even though most of the code can be generated, to be a good software developer you must understand the fundamentals, and hence candidates should demonstrate that skill as part of the assessment. And since ChatGPT can undermine that, it compromises the assessments.

It is an ongoing debate, and as a facilitator of technical assessments, we at HackerEarth can see the argument from both ends. We have always aligned our assessment methodology with how work gets done in real life, but we appreciate that an engineering manager may want to know how good a developer is sans AI assistance. So, instead of taking sides, we decided to support both personas.

We built a unique proctoring feature that creates a constrained environment for the test taker and blocks the use of not just ChatGPT, but any other support tool. Smart Browser, the new addition to our flagship Assessments product, is now live, and you can read more about it here. At the same time, we have embedded generative AI into our code editors. Like the Smart Browser, it’s an optional setting. When turned on, test takers can use generative AI right there in the test interface to answer their questions. We are also investing in questions that are better suited for situations where a developer has access to generative AI while writing their code.

Okay, that’s enough about AI now!

The second skill-related conversation starter I referred to was the need to up- and re-skill tech teams. Upskilling programs have existed for a long while, and we all know how they have fared. I recently wrote a piece for Fast Company in which I went into great depth on why traditional upskilling initiatives do not work.

To recap:

– there is a significant gap in measuring ROI from current upskilling platforms

– there is a lack of social contract in these learning models, which hampers the 70% of upskilling that happens organically within a team

– these models lack an application-based learning pathway, so most of the time, course completion cannot be taken as a signifier of skill enhancement

I believe upskilling is integral to the skill-first tech ecosystem we are trying to build. Continuous learning not only helps engineers find a better pathway for career growth but also enables companies to address skill gaps and predict productivity outcomes. HackerEarth has always favored ROI-based learning pathways that do more than just help your engineers attain a certificate.

We have worked hard to create a platform that can merge real-world needs for developer upskilling with business outcomes, and we are close to completion. You can check out our Learning and Development platform here or write to me to learn more. This is a conversation I’d love to have with you!

And with that, this first quarterly update is out to print. I feel inclined to say something clichéd like we’re all in the same boat and la di da, but I’m sure you all know that. Tech lived in a unique bubble during COVID (one that had been building over the previous decade), but that bubble has now burst. The only way forward is through experimentation, exploration, and innovation, and that, as always, starts with who and how you hire.

Until next quarter,

Sachin


Related reads

The Mobile Dev Hiring Landscape Just Changed

Revolutionizing Mobile Talent Hiring: The HackerEarth Advantage

The demand for mobile applications is exploding, but finding and verifying developers with proven, real-world skills is more difficult than ever. Traditional assessment methods often fall short, failing to replicate the complexities of modern mobile development.

Introducing a New Era in Mobile Assessment

At HackerEarth, we're closing this critical gap with two groundbreaking features, seamlessly integrated into our Full Stack IDE:


Now, assess mobile developers in their true native environment. Our enhanced Full Stack questions now offer full support for both Java and Kotlin, the core languages powering the Android ecosystem. This allows you to evaluate candidates on authentic, real-world app development skills, moving beyond theoretical knowledge to practical application.


Say goodbye to setup drama and tool-switching. Candidates can now build, test, and debug Android and React Native applications directly within the browser-based IDE. This seamless, in-browser experience provides a true-to-life evaluation, saving valuable time for both candidates and your hiring team.

Assess the Skills That Truly Matter

With native Android support, your assessments can now delve into a candidate's ability to write clean, efficient, and functional code in the languages professional developers use daily. Kotlin's rapid adoption makes proficiency in it a key indicator of a forward-thinking candidate ready for modern mobile development.

[Chart: Breakup of mobile development skills – roughly 95% of mobile app development happens in Java and Kotlin, illustrating the importance of assessing proficiency in both modern (Kotlin) and established (Java) codebases.]
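As an illustration, a native Kotlin assessment can ask candidates to implement a small piece of real app logic rather than an abstract puzzle. The task below is a hypothetical example of that style of question – it is not an actual HackerEarth question, and the scenario and function names are made up:

```kotlin
// Hypothetical task: given screen-view events from an Android app, group them into
// sessions, where a gap of more than 30 minutes between events starts a new session.

data class ScreenEvent(val screen: String, val timestampMillis: Long)

fun splitIntoSessions(
    events: List<ScreenEvent>,
    sessionGapMillis: Long = 30 * 60 * 1000L
): List<List<ScreenEvent>> {
    if (events.isEmpty()) return emptyList()
    val sorted = events.sortedBy { it.timestampMillis }
    val sessions = mutableListOf(mutableListOf(sorted.first()))
    for (event in sorted.drop(1)) {
        val previous = sessions.last().last()
        if (event.timestampMillis - previous.timestampMillis > sessionGapMillis) {
            sessions.add(mutableListOf(event))   // gap too large: start a new session
        } else {
            sessions.last().add(event)           // continue the current session
        }
    }
    return sessions
}

fun main() {
    val events = listOf(
        ScreenEvent("Home", 0L),
        ScreenEvent("Search", 5 * 60 * 1000L),
        ScreenEvent("Checkout", 120 * 60 * 1000L) // two hours later: new session
    )
    println(splitIntoSessions(events).map { session -> session.map { it.screen } })
    // prints [[Home, Search], [Checkout]]
}
```

Questions of this kind reward familiarity with idiomatic Kotlin – data classes, default parameters, collection operations – rather than memorized algorithm templates.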

Streamlining Your Assessment Workflow

The integrated mobile emulator fundamentally transforms the assessment process. By eliminating the friction of fragmented toolchains and complex local setups, we enable a faster, more effective evaluation and a superior candidate experience.

[Diagram: The old, fragmented way vs. the new, integrated way – the streamlined workflow removes technical hurdles, allowing candidates to focus purely on demonstrating their coding and problem-solving abilities.]

Quantifiable Impact on Hiring Success

A seamless and authentic assessment environment isn’t just a convenience; it’s a powerful catalyst for efficiency and better hiring outcomes. By removing technical barriers, candidates can focus entirely on demonstrating their skills, leading to faster submissions and higher-quality signals for your recruiters and hiring managers.

A Better Experience for Everyone

Our new features are meticulously designed to benefit the entire hiring ecosystem:

For Recruiters & Hiring Managers:

  • Accurately assess real-world development skills.
  • Gain deeper insights into candidate proficiency.
  • Hire with greater confidence and speed.
  • Reduce candidate drop-off from technical friction.

For Candidates:

  • Enjoy a seamless, efficient assessment experience.
  • No need to switch between different tools or manage complex setups.
  • Focus purely on showcasing skills, not environment configurations.
  • Work in a powerful, professional-grade IDE.

Unlock a New Era of Mobile Talent Assessment

Stop guessing and start hiring the best mobile developers with confidence. Explore how HackerEarth can transform your tech recruiting.

Vibe Coding: Shaping the Future of Software

A New Era of Code

Vibe coding is a new method of using natural language prompts and AI tools to generate code. I have seen firsthand that this change makes software more accessible to everyone. In the past, being able to produce functional code was a strong advantage for developers. Today, when code is produced quickly through AI, the true value lies in designing, refining, and optimizing systems. Our role now goes beyond writing code; we must also ensure that our systems remain efficient and reliable.
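To make the idea concrete, here is a hypothetical illustration of the vibe-coding workflow: a plain-English prompt and the kind of small Kotlin function an AI assistant might generate from it. Both the prompt wording and the generated code are illustrative assumptions, not output from any specific tool.

```kotlin
// Prompt (natural language): "Write a function that takes a list of order amounts,
// ignores negative values, applies a 10% discount to orders above 100, and returns the total."

// A plausible AI-generated implementation of that prompt:
fun totalWithDiscount(orders: List<Double>): Double =
    orders
        .filter { it >= 0 }                       // ignore negative values
        .sumOf { if (it > 100) it * 0.9 else it } // 10% discount on orders above 100

fun main() {
    val orders = listOf(50.0, 120.0, -10.0, 200.0)
    println(totalWithDiscount(orders)) // 50.0 + 108.0 + 180.0 = 338.0
}
```

The human’s job in this workflow is to review, refine, and decide whether the generated code fits the wider system – which is exactly where the value has shifted.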

From Machine Language to Natural Language

I recall the early days when every line of code was written manually. We progressed from machine language to high-level programming, and now we are beginning to interact with our tools using natural language. This development not only increases speed but also changes how we approach problem-solving. Product managers can now create working demos in hours instead of weeks, and founders have a clearer way of pitching their ideas with functional prototypes. It is important for us to rethink our role as developers and focus on architecture and system design rather than simply on typing code.

The Promise and the Pitfalls

I have experienced both sides of vibe coding. In cases where the goal was to build a quick prototype or a simple internal tool, AI-generated code provided impressive results. Teams have been able to test new ideas and validate concepts much faster. However, when it comes to more complex systems that require careful planning and attention to detail, the output from AI can be problematic. I have seen situations where AI produces large volumes of code that become difficult to manage without significant human intervention.

AI-powered coding tools like GitHub Copilot and AWS’s Q Developer have demonstrated significant productivity gains. For instance, at the National Australia Bank, it’s reported that half of the production code is generated by Q Developer, allowing developers to focus on higher-level problem-solving. Similarly, platforms like Lovable or Hostinger Horizons enable non-coders to build viable tech businesses using natural language prompts, contributing to a shift where AI-generated code reduces the need for large engineering teams. However, there are challenges. AI-generated code can sometimes be verbose or lack the architectural discipline required for complex systems. While AI can rapidly produce prototypes or simple utilities, building large-scale systems still necessitates experienced engineers to refine and optimize the code.

The Economic Impact

The democratization of code generation is altering the economic landscape of software development. As AI tools become more prevalent, the value of average coding skills may diminish, potentially affecting salaries for entry-level positions. Conversely, developers who excel in system design, architecture, and optimization are likely to see increased demand and compensation.

Seizing the Opportunity

Vibe coding is most beneficial in areas such as rapid prototyping and building simple applications or internal tools. It frees up valuable time that we can then invest in higher-level tasks such as system architecture, security, and user experience. When used in the right context, AI becomes a helpful partner that accelerates the development process without replacing the need for skilled engineers.

This is revolutionizing our craft, much like the shift from machine language to assembly to high-level languages did in the past. AI can churn out code at lightning speed, but remember, “Any fool can write code that a computer can understand. Good programmers write code that humans can understand.” Use AI for rapid prototyping, but it’s your expertise that transforms raw output into robust, scalable software. By honing our skills in design and architecture, we ensure our work remains impactful and enduring. Let’s continue to learn, adapt, and build software that stands the test of time.

Ready to streamline your recruitment process? Get a free demo to explore cutting-edge solutions and resources for your hiring needs.

Guide to Conducting Successful System Design Interviews in 2025

What is Systems Design?

Systems Design is an all-encompassing term which encapsulates both frontend and backend components, harmonized to define the overall architecture of a product.

Designing robust and scalable systems requires a deep understanding of the application, its architecture, and the underlying components like networks, data, interfaces, and modules.

Systems Design, in its essence, is a blueprint of how software and applications should work to meet specific goals. The multi-dimensional nature of this discipline makes it open-ended – there is no single one-size-fits-all solution to a system design problem.

What is a System Design Interview?

Conducting a System Design interview requires recruiters to take an unconventional approach and look beyond right or wrong answers. Recruiters should aim to evaluate a candidate’s ‘systemic thinking’ skills across three key aspects:

How they navigate technical complexity and uncertainty
How they meet expectations of scale, security and speed
How they focus on the bigger picture without losing sight of details

This assessment of the end-to-end thought process and a holistic approach to problem-solving is what the interview should focus on.

What are some common topics for a System Design Interview?

System design interview questions are free-form and exploratory in nature, with no single right or best answer to a specific problem statement. Here are some common questions:

How would you approach the design of a social media app or video app?

What are some ways to design a search engine or a ticketing system?

How would you design an API for a payment gateway? (see the sketch after these questions)

What are some trade-offs and constraints you will consider while designing systems?

What is your rationale for taking a particular approach to problem solving?
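For instance, for the payment gateway question above, one minimal Kotlin sketch of the kind of API surface a candidate might start from could look like the code below. The names, fields, and the in-memory fake are illustrative assumptions to anchor the discussion (idempotency, retries, money as minor units), not a reference design.

```kotlin
import java.util.UUID

// Illustrative API surface for a payment gateway; a real answer would go on to discuss
// authentication, webhooks, partial refunds, and failure handling.
data class ChargeRequest(
    val idempotencyKey: String,   // lets clients retry safely without double-charging
    val amountMinorUnits: Long,   // e.g. cents, to avoid floating-point money
    val currency: String,
    val paymentMethodToken: String
)

data class ChargeResult(val chargeId: String, val status: String)

interface PaymentGateway {
    fun charge(request: ChargeRequest): ChargeResult
    fun refund(chargeId: String, amountMinorUnits: Long): ChargeResult
}

// A trivial in-memory fake, only to show how the interface would be exercised.
class InMemoryGateway : PaymentGateway {
    private val charges = mutableMapOf<String, ChargeResult>()

    override fun charge(request: ChargeRequest): ChargeResult =
        charges.getOrPut(request.idempotencyKey) {
            ChargeResult(chargeId = UUID.randomUUID().toString(), status = "SUCCEEDED")
        }

    override fun refund(chargeId: String, amountMinorUnits: Long): ChargeResult =
        ChargeResult(chargeId = chargeId, status = "REFUNDED")
}

fun main() {
    val gateway = InMemoryGateway()
    val request = ChargeRequest("key-123", 4999L, "USD", "tok_abc")
    println(gateway.charge(request)) // first call creates the charge
    println(gateway.charge(request)) // retry with the same key returns the same result
}
```

In an interview, the sketch itself matters less than the candidate’s reasoning about why each field exists and how the design would evolve at scale.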

Usually, interviewers base the questions on the organization, its goals, key competitors, and the candidate’s experience level.

For senior roles, the questions tend to focus on assessing the computational thinking, decision-making, and reasoning ability of a candidate. For entry-level job interviews, the questions are designed to test the hard skills required for building a system architecture.

The Difference between a System Design Interview and a Coding Interview

If a coding interview is like a map that takes you from point A to point Z, a systems design interview is like a compass that gives you a sense of the right direction.

Here are three key differences between the two:

Coding challenges follow a linear interviewing experience, i.e., candidates are given a problem and interaction with recruiters is limited. System design interviews are more lateral and conversational, requiring active participation from interviewers.

Coding interviews or challenges focus on evaluating the technical acumen of a candidate, whereas systems design interviews are oriented to assess problem-solving and interpersonal skills.

Coding interviews are based on a right/wrong approach with ideal answers to problem statements, while a systems design interview focuses on assessing the thought process and the ability to reason from first principles.

How to Conduct an Effective System Design Interview

One common mistake recruiters make is approaching a system design interview with the expectations and preparation of a typical coding interview.

Here is a four-step framework technical recruiters can follow to ensure a seamless and productive interview experience:

Step 1: Understand the subject at hand

  • Develop an understanding of basics of system design and architecture
  • Familiarize yourself with commonly asked systems design interview questions
  • Read about system design case studies for popular applications
  • Structure the questions and problems by increasing magnitude of difficulty

Step 2: Prepare for the interview

  • Plan the extent of the topics and scope of discussion in advance
  • Clearly define the evaluation criteria and communicate expectations
  • Quantify constraints, inputs, boundaries and assumptions
  • Establish the broader context and a detailed scope of the exercise

Step 3: Stay actively involved

  • Ask follow-up questions to challenge a solution
  • Probe candidates to gauge real-time logical reasoning skills
  • Make it a conversation and take notes of important pointers and outcomes
  • Guide candidates with hints and suggestions to steer them in the right direction

Step 4: Be a collaborator

  • Encourage candidates to explore and consider alternative solutions
  • Work with the candidate to break the problem down into smaller tasks
  • Provide context and supporting details to help candidates stay on track
  • Ask follow-up questions to learn about the candidate’s experience

Technical recruiters and hiring managers should aim to provide an environment of positive reinforcement, actionable feedback, and encouragement for candidates.

Evaluation Rubric for Candidates

Facilitate Successful System Design Interview Experiences with FaceCode

FaceCode, HackerEarth’s intuitive and secure platform, empowers recruiters to conduct system design interviews in a live coding environment with HD video chat.

FaceCode comes with an interactive diagram board, which makes it easier for interviewers to assess design-thinking skills and conduct communication assessments using a built-in library of diagram-based questions.

With FaceCode, you can combine your feedback points with AI-powered insights to generate accurate, data-driven assessment reports in a breeze. Plus, you can access interview recordings and transcripts anytime to recall and trace back the interview experience.

Learn how FaceCode can help you conduct system design interviews and boost your hiring efficiency.
