About half a year ago, I started trying AI coding assistants such as Cursor and GitHub Copilot. After a brief trial they didn't feel right, so I set them aside for the time being. My impression back then was:
Cursor's presence was too strong; it didn't feel natural to use, and I still preferred VSCode.
But as AI IDEs and AI coding plugins kept showing up everywhere, I sensed that this trend could not be ignored and deserved another try. So I subscribed to the paid tier of GitHub Copilot and started using it seriously in a real project.
After using it for a while, my overall takeaway is that AI-assisted programming is an irreversible trend. The sooner you master a tool that suits you and get through the break-in period with AI, the better it is for your growth as a programmer.
The project I practiced on is a Mac app built with Tauri 2. By rough estimate, about half of its code was generated with AI assistance. For example, to write a React component that displays the Videos under a Playlist, I first built the basic structure of the page, then refined it with incremental instructions, such as adding an ordinal number to each Video and setting a hover style. It felt like hiring an efficient intern to help with the work.
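To make that workflow concrete, here is a minimal sketch of the kind of component that emerges from such iterations; the types, props, and class names are illustrative, not the project's actual code:

```tsx
// PlaylistVideos.tsx: illustrative only; the Video shape is hypothetical.
type Video = { id: string; title: string };

export default function PlaylistVideos({ videos }: { videos: Video[] }) {
  return (
    <ol>
      {videos.map((video, index) => (
        // A Tailwind-style hover class stands in for the "hover style" step.
        <li key={video.id} className="hover:bg-gray-100">
          {index + 1}. {video.title}
        </li>
      ))}
    </ol>
  );
}
```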
It may be that GitHub Copilot's capability is limited, that my technique is still immature, or that this is simply the current state of AI coding assistants, but I found that for complex bugs that require a "leap of thinking" to solve, these tools offer very limited help. For example, I once hit a problem where theme switching worked fine in development mode but broke after deploying to production and restarting the app. The bug was fairly subtle, and, as expected, the AI failed to offer an effective solution. Another example: how do you ship devtools to production while disabling the default Cmd+Shift+I shortcut? None of the AI's suggestions worked; I finally found the approach by digging through Tauri's source code. For now, AI coding assistants are better at executing clearly specified tasks and generating code snippets from context; for complex problem analysis, root-cause localization, and scenarios that demand creative solutions, the deep thinking and experienced judgment of human programmers are still required. If some future AI, with more advanced algorithms and a more complete knowledge base, simulates or even surpasses that "leap of thinking", that will be another topic; for now, it doesn't seem worth worrying about.
"Stale information" is another problem that cannot be ignored (Cursor does somewhat better here). Copilot seems to lack an effective mechanism for fetching and updating the latest technical documentation in real time, so the code it generates sometimes uses outdated APIs. The problem was quite noticeable in my Tauri 2 project, because Tauri 2's API changed substantially from Tauri 1. I often had to consult the latest API docs and replace calls by hand on top of the AI-generated code. Cursor handles this better, but it too occasionally mixes old and new APIs.
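One concrete example of this kind of drift (a generic illustration, not a case from my project): the invoke helper moved modules between Tauri major versions, so code generated from Tauri 1 era docs fails to resolve under Tauri 2.

```ts
// Tauri 1: assistants trained on older docs tend to emit this import:
// import { invoke } from '@tauri-apps/api/tauri';

// Tauri 2: the same function now lives in the `core` module:
import { invoke } from '@tauri-apps/api/core';

// Calling a Rust command is unchanged across versions.
const reply = await invoke<string>('greet', { name: 'Tauri' });
console.log(reply);
```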
Based on this experience with AI-assisted programming, I began to think seriously: what does the rise of AI-assisted programming actually mean? How will it reshape the software industry?
Will the programmer profession disappear?
If someone with no programming experience can build a decent-looking product with AI coding tools, do we still need programmers? If the bar for a software product stops at "usable", then the value of programmers will indeed be challenged. But in real software development we inevitably run into hard-to-debug bugs, or requirements that can only be met with unconventional, creative approaches. Today's AI tools do not yet understand complex context deeply enough and often fall short on hard problems. If you let AI program completely unsupervised, the likely outcome is that the bug goes unfixed while the code becomes messier and harder to maintain.
In collaboration-heavy organizations, the pace at which programmers are displaced by AI tools will probably be slower. Anyone with relevant work experience knows that in day-to-day work a programmer's code output is actually quite limited; much of the time goes to requirement discussions, bug fixing, untangling legacy code, writing documentation, on-call duty, resolving dependency conflicts, waiting for builds, and other work that isn't direct coding. Dropping AI straight into an existing complex project carries the security risk of code leakage on one hand; on the other, lacking the full context of the legacy code, the AI may generate code that violates project conventions or best practices, increasing maintenance costs instead. After all, it is still humans, not AI, who are ultimately accountable for code quality and production issues; if a major incident occurs, you can hardly assign a P1 to the AI. So in large organizations that value teamwork and code quality, the direct impact of AI coding tools on programmers may be relatively small. How long that holds is hard to say: once companies see the productivity gains AI can deliver, they will restructure their organizations and workflows around it, and programmers will inevitably be affected.
The rise of AI programming is not the end of programmers but an evolution. AI will not replace all programmers, but it will profoundly change their roles and the skills required of them.
How should junior and mid-level programmers respond to ever more powerful AI coding tools?
First, develop a sense of urgency and embrace the change; it is that urgency that fuels the motivation to learn and adapt. If you still want a career in programming, the first task is to actively learn and master an AI coding tool (Copilot is relatively lightweight and makes a good entry-level choice). Work toward a smooth partnership with the tool, exploit its code generation and assistance to the fullest to raise your personal productivity, and put the time saved into more valuable, more challenging work.
Next, think seriously about your future career direction. If you want to keep going deep in programming, focus on the core skills of a senior engineer: system design, analysis of complex business logic, problem decomposition and abstraction, debugging hard bugs and root-cause analysis, system performance optimization, innovative technical design, and so on. Along the way, AI tools can also speed up the learning itself; for example, when studying excellent open-source code, you can use AI to aid comprehension, trace the code's logic, and answer questions about its design.
If you are more drawn to product work, or have startup ambitions, invest more time in product design, user research, and market analysis, and use AI tools to rapidly build MVP prototypes, validate product ideas, and explore business models.
A new paradigm of human-AI collaboration in programming
The relationship between a user and an AI coding tool can be compared to a conductor and an orchestra. The programmer, like an experienced conductor, holds a clear vision and goals, owns the overall architecture and module design of the software, and directs and coordinates the AI tool (the orchestra) to write code (perform the piece) efficiently and well. The AI tool, like a well-trained orchestra, has powerful code generation and assistance capabilities, but needs the conductor's explicit instructions and professional guidance to produce beautiful music.
Different AI coding tools each have their own features, but the essentials of interacting with them are shared: fine-grained task decomposition and instruction design, plus code review. In this new paradigm of human-AI collaboration, programmers need not only to communicate efficiently with AI tools but also to collaborate better with teammates, jointly using AI to raise the team's overall development efficiency and code quality.
My poor first experience with AI coding assistants half a year ago also came from not grasping the essentials of communicating with them. Think of these AI tools as responsive, always-on-call "outsourced programmers", and treat every code change as a GitHub pull request. Make sure each change has a clear, specific goal and keep the diff small enough to review. The end state is that even though you did not write all the code yourself, you are thoroughly familiar with its overall logic and key details, so that even when the AI cannot fix some deep bug, you can analyze and locate the root cause step by step from your own understanding of the code.
Summary
The rise of AI-assisted programming is both a challenge and an opportunity, and it is essentially an irreversible trend. If someone had told me this half a year ago, I might have been skeptical; now I am much more convinced. Even having only experienced the entry-level Copilot, I have genuinely felt the efficiency gains. Find an AI coding tool that suits you as early as possible, actively work through the break-in period with it, and then invest the time saved into expanding and enriching your skill set, continuously learning more advanced skills, so that you ride the AI wave rather than being washed away by it.
No matter what kind of startup you start, it will probably be a stretch for you, the founders, to understand what users want. The only kind of software you can build without studying users is the sort for which you are the typical user.
Even if you eat your own dog food, the result can still fall short: the requirements may not be distilled precisely enough, the product may not be impressive enough, you may lack channels for promotion, and so on. NVIDIA CEO Jensen Huang touched on this in an early talk:
If you want to be successful, I would encourage you to develop a tolerance for failure. However, there’s something crucial about failure: if you fail often enough, you might actually become a failure, which is fundamentally different from being successful. So the real question becomes: how do you teach someone to fail, but fail quickly, and change course the moment they recognize a dead end?
The way to approach this is through what we call “intellectual honesty.” This means continuously assessing whether something makes sense, and if it’s the wrong decision, being willing to change your mind.
To achieve this, you have to nurture a tolerance for risk-taking and teach people how to fail—quickly and with minimal cost. Innovation demands a bit of experimentation; experimentation requires exploration, and exploration inevitably involves failure. Without a tolerance for failure, you’d never experiment; without experimentation, you’d never innovate; and without innovation, you’ll never succeed. Otherwise, you’ll just end up being, well… a dweeb.
'If today were the last day of my life, would I want to do what I am about to do today?' If the answer is 'no' for too many days in a row, I know I need to change something.
The idea behind affirmations is that you simply write down your goals 15 times a day and somehow, as if by magic, coincidences start to build until you achieve your objective against all odds.
PocketBase: An open-source realtime backend in one file.
Deno: A modern runtime for JavaScript and TypeScript.
This architecture had been running for about three years, and I was mostly satisfied. Zola builds fast and its functionality was basically adequate; with Deno, I could write APIs for the frontend pages; and with Alpine.js and Tailwind CSS, I could create frontend pages quickly.
However, it wasn’t perfect, mainly for these reasons:
My go-to frontend library is React, but it’s hard to combine React and Zola in an elegant way.
I wanted to handle the HTTP server, Static Site Generation (SSG), and React in a unified way.
This tech stack lacked some features I wanted, such as Server-Side Rendering (SSR) and using Shiki as the code highlighter.
After some investigation, I decided on Astro as my new site engine.
It has all the features I wanted in a meta-framework.
I’ve seen more and more tech influencers mentioning it favorably.
However, making this transition wasn't easy. On one hand, I was familiar with the current tools and workflow; though it was missing some features, it was tolerable. On the other hand, it would take time to become familiar with new tools, build a new workflow, and transfer the content.
After a period of deliberation, I decided to make the switch.
Design
The new site is separated into three parts: essays, comments, and stream. Essays are articles I write on specific topics; stream is mainly for life activities, like books, photos, and thoughts. Kind of like a Facebook homepage, but just for myself.
Essays are written in MDX and generated by Astro. Stream contents are stored in PocketBase and rendered dynamically via Astro's SSR; these entries are created through PocketBase's built-in GUI. Comments are also stored in PocketBase and read and written through PocketBase's API.
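As a rough sketch of what such an SSR page can look like (the collection and field names here are hypothetical, not the site's actual schema), an .astro page can query PocketBase on each request:

```astro
---
// src/pages/stream.astro: runs per request under Astro's SSR.
import PocketBase from 'pocketbase';

// Hypothetical local PocketBase address and collection name.
const pb = new PocketBase('http://127.0.0.1:8090');
const result = await pb.collection('stream').getList(1, 20, {
  sort: '-created',
});
---
<ul>
  {result.items.map((item) => (
    <li>
      <time>{new Date(item.created).toLocaleDateString()}</time>
      <p>{item.content}</p>
    </li>
  ))}
</ul>
```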
Development
Astro’s Developer Experience is really good. It may take some time to get familiar with its directory structure, design philosophy and API. But once you get through this stage, you gain power. Thanks to the well-written docs and GitHub issues, whenever I encountered a problem, there was usually a solution.
I wanted to provide a full-content RSS feed, but MDX files can only be fully rendered within an .astro context. Thanks to the (experimental) Container API, I nailed it.
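The shape of the solution looks roughly like the sketch below, following Astro's full-content RSS recipe; the collection name, frontmatter fields, and routes are assumptions, and the exact imports are version-dependent:

```ts
// src/pages/rss.xml.ts: sketch only; assumes a "blog" content collection.
import rss from '@astrojs/rss';
import { getCollection } from 'astro:content';
import { experimental_AstroContainer as AstroContainer } from 'astro/container';
import { loadRenderers } from 'astro:container';
import { getContainerRenderer as getMDXRenderer } from '@astrojs/mdx';
import type { APIContext } from 'astro';

export async function GET(context: APIContext) {
  // The container needs the MDX renderer to turn MDX components into HTML.
  const renderers = await loadRenderers([getMDXRenderer()]);
  const container = await AstroContainer.create({ renderers });

  const posts = await getCollection('blog');
  const items = [];
  for (const post of posts) {
    const { Content } = await post.render();
    // Render the post to an HTML string outside of any page context.
    const content = await container.renderToString(Content);
    items.push({
      title: post.data.title, // assumed frontmatter field
      pubDate: post.data.date, // assumed frontmatter field
      link: `/essays/${post.slug}/`,
      content,
    });
  }

  return rss({
    title: 'My Site',
    description: 'Full-content feed',
    site: context.site!,
    items,
  });
}
```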
I struggled for half a day trying to achieve a swap-HTML-from-the-server effect with an .astro component, but it was hard to rebind JS functions to the swapped-in elements. I ended up just using React (see the sketch after these notes).
You can't bind JS functions in the JSX-like template syntax, because those aren't real DOM elements; they get serialized to HTML, so the functions can't survive.
<script is:inline> can keep the HTML and its scripts together, but after you replace the HTML with new content, the old scripts no longer work (because they were bound to the old DOM).
Server islands are useful for non-server-related components; they work well for lists or client-side interactions. But when server-rendered content is involved, DOM rebinding becomes an issue.
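Here is the sketch mentioned above: rather than swapping raw HTML and losing event bindings, the React version fetches data and re-renders, so handlers survive. The endpoint and field names are hypothetical.

```tsx
// StreamList.tsx: sketch; /api/stream and the Item shape are made up.
import { useEffect, useState } from 'react';

type Item = { id: string; content: string };

export default function StreamList() {
  const [items, setItems] = useState<Item[]>([]);

  useEffect(() => {
    // Fetch JSON instead of an HTML fragment, then let React render it.
    fetch('/api/stream')
      .then((res) => res.json())
      .then(setItems);
  }, []);

  // Handlers declared in JSX are rebound on every render automatically.
  return (
    <ul>
      {items.map((item) => (
        <li key={item.id} onClick={() => console.log(item.id)}>
          {item.content}
        </li>
      ))}
    </ul>
  );
}
```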
.astro pages are like PHP, but more frontend-friendly.
.astro components are handy for MDX.
It took me 5 days (working full-time) to design, develop, and transfer content. It wasn’t easy, but as the foundation for the next decade, it was worth the effort.
Deployment
This site is served on a VPS, so I chose Node as the adapter. In theory, deployment is simple: just run pnpm build and node ./dist/server/entry.mjs, then have Nginx point the domain at the port. But reality taught me a lesson: when I ran pnpm build, it took nearly 8 minutes and then failed due to running out of memory.
My first thought was that it allocated too much memory when optimizing images, so I set limitInputPixels: 10000 (100x100) in astro.config.mjs to tell Sharp not to process large images. It didn’t work.
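For reference, this is roughly what that attempt looked like; the option sits under the config of Astro's default Sharp image service (per Astro's docs; details may differ by version):

```js
// astro.config.mjs: sketch of the (ultimately unsuccessful) attempt.
import { defineConfig } from 'astro/config';

export default defineConfig({
  image: {
    service: {
      entrypoint: 'astro/assets/services/sharp',
      config: {
        // Ask Sharp to refuse inputs above 10,000 total pixels (100x100).
        limitInputPixels: 10000,
      },
    },
  },
});
```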
Then I found someone mentioning a command option --experimental-static-build that might fix this scenario, but still, it didn’t work.
When I left only one post for Astro to build, it worked, though it still took about 5 minutes.
I was kinda desperate then, even considering rolling back to Zola, which at least wouldn’t cause these issues. This thought lingered for a while before I decided to overcome the problem.
Since it could build on my machine, why not just skip the build process on the server and use my machine's build result? So I put the dist directory in git, and on the server side I just ran node ./dist/server/entry.mjs. Guess what? It worked! Though it was a bit quirky.
Just as I thought the mission was complete, something else happened. I wanted to write some cache files to the local disk, so I created a .env file, set the cache path, then built and pushed to the server. The server crashed. After digging into the logs, it turned out the cache path was still my machine's path instead of the server's path defined in its .env file.
The .env file is read at compile time, not runtime. I could remedy it by replacing all the env content before running, but it would be ugly and could lead to potential bugs.
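The pitfall in miniature (CACHE_PATH is a hypothetical variable name): with Vite/Astro, import.meta.env values are statically inlined into the bundle at build time, while process.env is read when the process actually starts.

```ts
// Inlined at build time: this bakes the build machine's value into dist/.
const builtInPath = import.meta.env.CACHE_PATH; // hypothetical variable

// Read at runtime: picks up whatever the server's environment provides
// (assuming the runtime or a loader like dotenv populates process.env).
const runtimePath = process.env.CACHE_PATH;
```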
Just when I was at my wit’s end, Bun saved me. Bun is known for its speed and efficiency, and Astro supports Bun, so I decided to give it a try. I ran bun install and bunx --bun astro build. Literally just these two commands, and it worked!
There were still some minor issues to handle (e.g. import.meta.env behaves differently under Bun), but the deployment story basically came to an end. Bun is not only fast but also resource-efficient. In the future, I will prefer Bun over Node.
Conclusion
Astro as a meta-framework fully meets my requirements. PocketBase, combining data storage and CMS in a single file, is really convenient. I hope this new system can help me create more and better content in the future.
As more and more people get both information and entertainment through electronic media, researchers have also studied the extent to which vocabulary is learned through television (Peters & Webb, 2018) and through computer games (Cobb & Horst, 2011; Sundqvist & Sylvén, 2012). They have found that these resources can be effective in expanding vocabulary. More research in this emerging area should help to specify the nature and extent of vocabulary growth via exposure to electronic media. In the meantime, research has overwhelmingly confirmed that vocabulary growth will be limited if the learner does not also engage in focused vocabulary learning activities (Nation, 2001). Jan Hulstijn and Batia Laufer (2001) and others provide evidence that vocabulary development is more successful when learners are fully engaged in activities that require them to attend carefully to the new words and even to use them in productive tasks. Izabella Kojic-Sabo and Patsy Lightbown (1999) found that effort and the use of good learning strategies, such as keeping a notebook, looking words up in a dictionary, and reviewing what has been learned were associated with better vocabulary development. Cheryl Boyd Zimmerman (2009) provides many practical suggestions for teaching vocabulary and also for helping learners to continue learning outside the classroom.
Vocabulary learning occurs because certain conditions are established which facilitate learning. These are repetition, noticing, retrieval, varied encounters and varied use, and elaboration. These learning conditions are themselves underpinned by two key factors:
repetition, i.e. the number of encounters with each word.
the quality of attention at each encounter.
The greater the number of encounters, the more likely learning is to occur; and the deeper the quality of the encounters, the more likely learning is to occur.
Research suggests that there may be little difference in the efficacy of learning smaller and larger sets of words. Instead, researchers found that students learned most effectively when there was greater spacing between each encounter with the same words. From a practical perspective, this suggests that learning a larger number of words may be more effective than learning a smaller number, because if the words are studied in sequence, there is likely to be more time between successive encounters with each word.
These are the key points about learning new vocabulary from the books How Languages Are Learned and How Vocabulary is Learned: chiefly "repeated encounters spaced over time" and "quality of attention". For the former, if you set aside some time every day for listening and reading, the mid- and high-frequency words are likely to come up again and again. The latter means being attentive when you meet a new word: recall whether you have seen it before and what it might mean, or guess its meaning from context and check it against a dictionary. You should also consciously review new words.
The purpose of output is to complete the retrieval of vocabulary, so that words can be used on the right occasions. If someone can point out your mistakes along the way, great; if not, it isn't a big problem. Here is the description from How Languages Are Learned:
Research on learner beliefs about the role of grammar and corrective feedback in L2 learning confirms that there is often a mismatch between students’ and teachers’ views. In two large-scale studies, Renate Schulz (2001) found that virtually all students expressed a desire to have their errors corrected while very few teachers felt this was desirable. In addition, while most students believed that ‘formal study of the language is essential to the eventual mastery of a [foreign language]’ (p. 254), just over half of the teachers shared this belief. While this may seem surprising, it is consistent with a version of language teaching that is based on the hypothesis that L2 acquisition takes place through exposure to meaningful and comprehensible language. According to this view, explicit instruction and feedback on error play very limited roles.