Struggle Is the Teacher for Continuous Improvement

We all dream of a life that flows smoothly, where goals are met on time, habits stick effortlessly, and everything unfolds exactly as we imagined. But life doesn’t work that way. And perhaps that’s its greatest gift.

There’s something deeply human about falling short. You set out to read a book a week. You promise yourself to exercise daily. The intention is pure, the motivation is real. But days pass. Work piles up. Distractions win. You skip a workout. You miss a chapter. And suddenly, you're left wondering, "Why can’t I stick to what I planned?"

It’s easy to feel defeated. But here’s the truth: those moments of falling short aren’t failures. They are invitations. Each missed goal and undone task is a chance to pause, reflect, and understand yourself better. It's in those very moments that growth begins.

I’ve made countless promises to myself. Read more. Move more. Learn more. Sometimes I keep them. Often I don’t. But over time, I’ve learned not to judge myself for it. Instead, I ask deeper questions. What’s getting in the way? What patterns keep showing up? What can I change tomorrow? These questions have taught me more than any perfectly executed plan ever could.

This journey of trying, stumbling, learning, and trying again is what makes life so incredibly rich. It's not about perfection. It's about progress. Behavioral science reminds us that our habits and actions are shaped by environments, emotions, and countless unseen factors. Understanding this gives us power. It shifts the narrative from "I failed" to "I’m figuring it out."

And isn’t that the whole point? We’re all works in progress. Every time you show up, even imperfectly, you are growing. Every time you reflect instead of retreat, you build resilience. Every time you restart, you prove to yourself that you haven’t given up.

So if you’ve fallen behind on your goals, whether it’s reading more books, exercising daily, starting a new project, or simply getting through the week, don’t beat yourself up. Celebrate the fact that you care enough to try. That is courage. That is strength.

Life isn't beautiful because it’s predictable. It’s beautiful because it challenges us, reveals us, and gives us endless opportunities to rise. Let your unmet goals be the beginning, not the end. Let your setbacks fuel your next step. And above all, trust that every imperfect effort is still moving you forward.

If you take just one thing from this post, let it be this. You don’t have to be perfect to be proud of yourself. Keep showing up. Keep learning. Keep becoming the person you’re meant to be.

Thank you for being part of this journey. We are all in it together.

Essential Concepts in Generative AI

As a solution architect exploring the fascinating world of Generative AI and Large Language Models (LLMs), I've come across a range of technical terms that can feel overwhelming at first. This blog post breaks them down into simple, digestible explanations for non-technical readers who are curious about how modern AI works.

Let's start with how models prepare text. Instead of splitting text into full words, subword tokenization breaks words into smaller units, which helps AI models handle rare or new words by combining familiar smaller parts. Unicode normalization ensures consistent formatting by resolving multiple representations of the same character, such as accented letters, so the model isn't confused by different encodings. Chunking breaks large documents into smaller, manageable pieces, making it easier for AI systems to retrieve and process the relevant parts.
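To make chunking concrete, here is a minimal sketch of fixed-size chunking with overlap; the `chunk_size` and `overlap` values, and the toy document, are illustrative rather than recommendations:

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping character chunks.

    The overlap preserves context that would otherwise be
    cut off at a chunk boundary."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # the last chunk already reaches the end
    return chunks

doc = "word " * 100  # a toy 500-character document
pieces = chunk_text(doc, chunk_size=200, overlap=50)
```

Production pipelines usually chunk on sentence or paragraph boundaries rather than raw characters, but the sliding-window idea is the same.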

Since transformer models process words in parallel, positional encoding is used to tell the model the order of words in a sentence. Cosine similarity helps measure how similar two pieces of text are by comparing the direction of their meaning vectors, which is useful when checking if content is contextually related.
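Cosine similarity is simple enough to compute by hand. The sketch below uses tiny hand-written vectors purely for illustration; in a real system the vectors would come from an embedding model:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means the
    same direction (very similar), 0.0 means orthogonal
    (unrelated)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "meaning vectors" for three concepts (hand-written
# numbers standing in for embedding-model output).
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
invoice = [0.0, 0.1, 0.95]
```

Comparing `cat` with `kitten` yields a much higher score than comparing `cat` with `invoice`, which is exactly how retrieval systems decide which stored text is contextually related to a query.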

An encoder model processes input (like a sentence) into a form that AI can understand and work with. Multi-head attention allows the model to look at different parts of the sentence simultaneously, capturing relationships between words no matter their position. This is facilitated through Q (Query), K (Key), and V (Value) matrices, which help the model match what's being asked with relevant information and content.
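The Q/K/V mechanism can be sketched as scaled dot-product attention for a single head. The two-token, two-dimensional vectors below are toy values chosen for readability, not real embeddings:

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention for one head.

    Each row of Q asks 'what am I looking for?', each row of K
    answers 'what do I contain?', and V holds the content that
    gets mixed into the output according to the weights."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Two tokens with 2-dimensional toy embeddings.
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
result = attention(Q, K, V)
```

Each output row is a weighted blend of the value rows, with more weight on whichever key best matches the query; multi-head attention simply runs several of these in parallel and concatenates the results.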

Layer normalization keeps the training process stable by making sure the values flowing through each layer of the network remain balanced. Activation functions like ReLU and sigmoid determine how a model processes information: ReLU is efficient and well suited to deep networks, while sigmoid is often used when outputs need to represent probabilities. However, deep networks can suffer from the vanishing gradient problem, where early layers stop learning because gradient values become increasingly small. Quantization improves efficiency by shrinking model size with smaller numerical formats, making AI models faster and more memory-efficient. Similarly, memory pooling techniques let the system reuse memory efficiently during training and inference.
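A small sketch makes the activation functions and quantization tangible. The int8 scheme below is the simplest symmetric linear variant (assuming at least one nonzero weight); real quantization toolchains add per-channel scales and calibration:

```python
import math

def relu(x):
    # Cheap and gradient-friendly: passes positives, zeroes negatives.
    return max(0.0, x)

def sigmoid(x):
    # Squashes any real number into (0, 1), handy for probabilities.
    return 1.0 / (1.0 + math.exp(-x))

def quantize_int8(values):
    """Symmetric linear quantization of floats to int8 (-127..127).

    Stores weights in 1 byte instead of 4 (float32), trading a
    small amount of precision for speed and memory."""
    scale = max(abs(v) for v in values) / 127.0
    return [round(v / scale) for v in values], scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

The round trip recovers the weights to within one quantization step (`scale`), which is why quantized models usually lose only a little accuracy.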

To improve learning efficiency, few-shot learning enables models to generalize from just a few examples instead of needing thousands. Transfer learning helps models get a head start by reusing knowledge from a previously trained model, saving time and resources. Chain-of-thought prompting improves reasoning by encouraging the model to walk through its logic step-by-step, much like how humans solve problems.
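Few-shot and chain-of-thought prompting are really just ways of assembling the prompt string. Here is a hypothetical `build_prompt` helper (the example question and field names are mine, not any library's API) showing the shape of such a prompt:

```python
def build_prompt(examples, question, chain_of_thought=True):
    """Assemble a few-shot prompt: a handful of worked examples
    followed by the new question. With chain_of_thought=True the
    examples include their reasoning, nudging the model to 'think
    out loud' before answering."""
    parts = []
    for ex in examples:
        parts.append(f"Q: {ex['q']}")
        if chain_of_thought:
            parts.append(f"Reasoning: {ex['steps']}")
        parts.append(f"A: {ex['a']}")
    parts.append(f"Q: {question}")
    # End on the cue we want the model to continue from.
    parts.append("Reasoning:" if chain_of_thought else "A:")
    return "\n".join(parts)

examples = [
    {"q": "I had 3 apples and bought 2 more. How many now?",
     "steps": "Start with 3, add 2, total 5.",
     "a": "5"},
]
prompt = build_prompt(examples, "I had 10 pens and lost 4. How many now?")
```

Ending the prompt on `Reasoning:` invites the model to produce its step-by-step logic first, which is the core of the chain-of-thought trick.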

Evaluation is vital to ensure models perform well. Stratified k-fold cross-validation ensures each test set has a fair representation of different categories. A paired t-test statistically compares two model versions to see whether there is a meaningful performance difference. BLEU scores evaluate how close a machine translation is to a human translation by comparing overlapping word sequences (n-grams), while ROUGE scores assess the quality of text summaries by measuring overlap with reference summaries.
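The intuition behind BLEU and ROUGE is word overlap. The sketch below implements only unigram precision and recall on toy sentences; real BLEU also uses longer n-grams, clipping, and a brevity penalty, and real ROUGE has several variants:

```python
def unigram_precision(candidate, reference):
    """BLEU-flavoured: what fraction of the candidate's words
    appear in the reference?"""
    cand, ref = candidate.split(), reference.split()
    hits = sum(1 for w in cand if w in ref)
    return hits / len(cand)

def unigram_recall(candidate, reference):
    """ROUGE-1-flavoured: what fraction of the reference's words
    does the candidate cover?"""
    cand, ref = candidate.split(), reference.split()
    hits = sum(1 for w in ref if w in cand)
    return hits / len(ref)

reference = "the cat sat on the mat"
good = "the cat sat on a mat"   # close to the reference
bad = "dogs run fast"           # unrelated
```

A close translation scores high on both measures, while an unrelated sentence scores near zero, which is exactly the signal these metrics provide at scale.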

To improve performance and efficiency, dynamic batching enables systems to group and process tasks more flexibly, adjusting batch sizes based on current demand. GPU-accelerated columnar data processing with zero-copy memory access leverages powerful graphics hardware to process large datasets rapidly, avoiding the overhead of unnecessary memory transfers.
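Dynamic batching can be sketched as draining a request queue in adaptively sized groups; the `max_batch` cap and request names below are illustrative:

```python
from collections import deque

def drain_batches(queue, max_batch=8):
    """Group pending requests into batches of up to max_batch.

    When demand is high the batches fill up (efficient GPU use);
    when demand is low, a final small batch goes out instead of
    waiting for a fixed-size batch to fill."""
    batches = []
    while queue:
        batch = []
        while queue and len(batch) < max_batch:
            batch.append(queue.popleft())
        batches.append(batch)
    return batches

pending = deque(f"req-{i}" for i in range(19))
batches = drain_batches(pending, max_batch=8)
```

Nineteen pending requests become two full batches of eight plus one batch of three, rather than forcing the last three requests to wait for five more arrivals.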

Diffusion models rely on two processes: forward diffusion adds noise to data over time, while reverse diffusion removes that noise to generate realistic outputs like images or text. Retrieval-Augmented Generation (RAG) enhances AI responses by searching a knowledge base for relevant information before generating an answer, improving factual accuracy.
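The retrieve-then-generate shape of RAG fits in a few lines. This sketch scores documents by simple word overlap and returns the augmented prompt instead of calling an LLM; the documents and query are toy examples, and real systems use embedding similarity for retrieval:

```python
def retrieve(query, documents, top_k=1):
    """Return the documents that share the most words with the
    query. A stand-in for embedding-based vector search."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def answer(query, documents):
    """Retrieve supporting context, then build the augmented
    prompt a real system would send to the LLM."""
    context = " ".join(retrieve(query, documents))
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

docs = [
    "The Eiffel Tower is in Paris.",
    "Python is a programming language.",
]
prompt = answer("Where is the Eiffel Tower?", docs)
```

Because the model now sees the retrieved fact inside its prompt, it can ground its answer in the knowledge base instead of relying purely on what it memorized during training.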

Lastly, for computer vision applications, image transformation techniques such as flipping, rotation, and zooming help AI models learn to recognize objects from various angles and contexts, boosting generalization performance.
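Two of those augmentations are easy to show on a tiny 2x2 "image" represented as a list of rows (toy data, standing in for real pixel arrays):

```python
def flip_horizontal(image):
    """Mirror a 2D image left-to-right: a cheap augmentation that
    teaches a model an object can face either way."""
    return [list(reversed(row)) for row in image]

def rotate_90(image):
    """Rotate a 2D image 90 degrees clockwise: reverse the row
    order, then transpose."""
    return [list(row) for row in zip(*image[::-1])]

img = [[1, 2],
       [3, 4]]
```

Applying these transforms to every training image effectively multiplies the dataset, which is why augmentation improves generalization without collecting new data.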

Understanding these concepts is a big step toward grasping how today's AI systems work. As we continue exploring the world of Generative AI, these building blocks help us create smarter, faster, and more reliable applications.

What Are AI Agents? How They Work and Why They’re the Future of Artificial Intelligence

From fire to flight, the human journey has always been about inventing tools to extend our reach. In the age of artificial intelligence, we are now building something more powerful than any tool before: intelligent agents that can act, reason, adapt, and learn. Welcome to the era of AI Agents.

Human evolution gave rise to a brain capable of reasoning, sensing the environment, planning actions, and learning from experience. In parallel, AI has evolved from classic machine learning, which focused on pattern recognition, to large language models (LLMs) that understand and generate human language. From there, we moved to generalist models that handle diverse tasks, and now to reasoning engines capable of planning and solving complex problems. However, even the most advanced models are still limited. They lack real-world interaction, memory, and autonomy. That is where AI agents come in.

An AI Agent is a system that combines the powerful reasoning of LLMs with the ability to perceive, plan, act, and learn in a loop. It is designed to operate in dynamic environments with a specific goal in mind and to pursue that goal autonomously, without constant human guidance. In simple terms, if an LLM is the brain, then an agent is the brain along with a body, senses, and limbs.

To understand how agents work, think of them as having three parts. First is the model, which serves as the brain, responsible for processing input, applying reasoning, and planning. Examples include Google's Gemini and Gemma models. Second are the tools, which act as limbs and senses, allowing agents to interact with the world. These might include search engines, APIs, or databases. Lastly, the orchestration layer functions as the nervous system and coordinates how the agent observes the world, thinks, decides, and acts repeatedly until the goal is achieved. Frameworks like ADK, CrewAI, and LangGraph support this loop.
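The orchestration loop can be sketched in a few lines. Everything here is illustrative: `decide` stands in for an LLM call, the `search` tool is a hard-coded lookup, and none of it reflects any particular framework's API:

```python
def run_agent(goal, tools, decide, max_steps=5):
    """Minimal agent loop: think, act with a tool, observe the
    result, and repeat until the 'brain' decides to finish.

    `decide` plays the role of the model (the brain); `tools`
    maps names to callables (the limbs and senses)."""
    history = []
    for _ in range(max_steps):
        action, arg = decide(goal, history)          # think / plan
        if action == "finish":
            return arg
        observation = tools[action](arg)             # act
        history.append((action, arg, observation))   # remember
    return None  # gave up within the step budget

# Toy setup: one 'search' tool backed by a tiny lookup table.
tools = {"search": lambda q: {"capital of France": "Paris"}.get(q, "unknown")}

def decide(goal, history):
    """Hard-coded policy standing in for an LLM: search first,
    then finish with whatever the search observed."""
    if not history:
        return ("search", goal)
    return ("finish", history[-1][2])

result = run_agent("capital of France", tools, decide)
```

Real orchestration layers add prompt construction, tool schemas, and error handling, but the observe-think-act loop above is the core pattern they all implement.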

Why do we need agents if we already have powerful models? Because models are confined to their training data and perform single-shot inferences. They lack memory, tool access, and decision-making continuity. Agents overcome these limitations. They extend knowledge through tools, manage context across sessions, integrate with real-world functions, and carry out complex multi-step plans. In essence, they bring cognition and action together.

A helpful way to visualize this is through a kitchen metaphor. Imagine a chef taking a customer order, checking the pantry, planning the dish, cooking step by step, and adjusting along the way. That is exactly how an AI agent works: gathering information, planning internally, acting with tools, and refining actions based on feedback, all in pursuit of a defined goal.

Agents are useful in any domain where complex, multi-step problem-solving, interaction with real-world systems, and dynamic adaptation are required. This includes scientific research, market analysis, customer service, personalized learning, wellness guidance, data entry, and automated report generation. On the other hand, agents may not be appropriate for high-risk workloads where deterministic outcomes are safer, or for tasks that are better served by simpler rule-based or traditional machine learning models.

That said, deploying agents comes with challenges. These include managing costs and token usage, designing and maintaining useful tools, enabling secure agent-to-agent interactions, evaluating agent performance, and ensuring transparency through observability and tracing. Security and deployment at scale are also major concerns. Moreover, agents exist on a spectrum from fully autonomous to human-in-the-loop, and the right level of control depends on the task.

So, to agent or not to agent? If your workflow involves unpredictable inputs, complex decisions, or sequential logic that benefits from reasoning and adaptability, then agents are not just the future. They are already the present. We are no longer just building smarter models. We are building intelligent systems that think, act, and improve.

How to Stay Motivated and Get Things Done

We all have dreams, goals, or even just simple things we wish we had time for. Yet so often, life gets in the way. We tell ourselves we're too busy, too tired, or too late. But the truth is, most of what holds us back isn’t time, it’s mindset. The difference between those who start and those who stay stuck often comes down to just two simple principles.

The first is this: you must fall in love with what you do. Passion isn’t a nice-to-have; it’s fuel. When you love something, it doesn’t feel like an obligation. It pulls you in. It wakes you up. It keeps you coming back, even on the hardest days. Like a gambler drawn to the table, you keep showing up not because you’re forced to, but because you can’t imagine not doing it. The best part? Unlike gambling, your odds aren’t left to luck. When you choose a craft, a mission, or a project that lights you up and you’re willing to invest the time to master it, you begin to shape your own success. You won’t need someone else to motivate you. Even in the busiest seasons of life, you’ll find yourself making time, not excuses.

The second principle is just as powerful: change the way you frame your challenges. Stop asking, “How could I possibly have time?” Start asking, “When will I make time?” This subtle shift turns a closed door into a crack of possibility. It stops the spiral of self-doubt before it begins. Too many people, especially those who are capable, smart, and full of potential, talk themselves out of trying before they even begin. They list out every obstacle, every reason it won’t work, every fear disguised as logic. They let their inner critic dominate, when what they really need is their inner champion to step up.

Worse yet, when problems arise, many rush to seek opinions from others, people who don’t understand the context or share the vision. In the noise of outside voices, they lose touch with their own. And so days turn into weeks, weeks into years, and the dream remains untouched.

But here’s the truth: the biggest battles are rarely external. They’re internal. People don’t fail because the world beats them down. They fail because they surrender to doubt, delay, and distraction. The moment you think, “Maybe I can,” act on it. That thought is a spark. Don’t let it die. Begin, even if it’s messy. Stay flexible. Obstacles will come. The road will twist. Sometimes you’ll have to make a dozen turns before you see the destination. But every step forward is a win.

And as you walk this path, remember this: there is immense joy in creating. There is deep fulfillment in solving problems. These are not chores. They are gifts. They remind us that we are alive. That we have power. That we can shape our world, even if it’s just a little at a time.

So don’t wait. Not for perfect timing. Not for permission. Not for everything to make sense. If something matters to you, start now. Ask not if you can, but when you will. And when the road gets hard, as it always does, take the next step anyway.

Because that is how you move forward. That is how you grow. And that is how you come alive.

Why Technical Talent Struggles in Bureaucratic Systems

In many countries, people believe technology is the key to progress. But in some places, especially where bureaucracy runs deep, technology tends to decline instead of thrive. This is not because there is a lack of talent, but because the system does not recognize or reward those who build, fix, and innovate.

Technical professionals such as engineers, developers, and craftsmen often face an unfair deal. They carry heavier responsibilities, solve harder problems, and continuously sharpen their skills, yet they receive the same pay and recognition as administrative staff who might not face the same level of challenge or pressure. Over time, this imbalance wears people down. Some ask themselves why they should work so hard when others do less and still get ahead. Others consider switching careers just to be seen and heard.

I experienced this personally while working as a software engineer in Hong Kong. I loved solving problems and building reliable systems. I took pride in writing clean code and making things work better for users. But no matter how much value I delivered, the recognition usually went to someone else, often a manager who didn’t write a single line of code. Promotions and raises were given based on how well you managed meetings or headcounts, not how much you contributed to the product or the customer experience. At one point, I realized that if I wanted to move up, I had to stop being an engineer and become a manager. That realization broke my heart. It made me question whether we truly value builders at all.

This problem is not new. In Chinese history, many people who had real skills started out as soldiers, merchants, or craftsmen but eventually became officials because that was the only path to respect and a decent life. Those who stayed in technical roles were treated like common laborers, working harder for less. Over time, this led to poor craftsmanship, lower quality, and a culture of cutting corners. When talent is not respected, excellence cannot survive.

In contrast, countries like Germany and Japan built cultures that honor craftsmanship. Skilled workers are respected, protected, and given space to master their trade. A watchmaker in Germany or a swordsmith in Japan could earn deep respect, not just for what they made, but for how they made it. They didn’t need to become managers to be valued. They were valued because they pursued excellence with discipline, pride, and care.

This difference in mindset still matters today. Countries that respect their builders lead in manufacturing, engineering, and innovation. Countries that ignore them fall behind and become dependent on others for progress.

If we want to build a better future, we need to stop telling our best minds that management is the only path to success. We need to reward people who create, who fix, who design, and who improve. We need to create cultures where technical excellence is celebrated, not overlooked. Let engineers stay engineers and still rise. Let craftsmen take pride in their work and still thrive. Let builders be heroes in their own right.

Because when we lift up our builders, we all rise together. A society that values its creators will never stop growing, and a team that believes in its talent can achieve the impossible.

Let us be the ones who build that culture. Let us lead by example. Let us make room for the makers, the doers, and the dreamers who turn ideas into reality.

The future belongs to those who build it.
