
The Challenge of a Scaled Scrum Team

Welcome back to another episode of Continuous Improvement! I'm your host, Victor, and today we'll be diving into the world of the Nexus framework and scaled Scrum. As some of you may know, software development can already be quite challenging, but when multiple teams are working on the same product with numerous dependencies, the complexity reaches a whole new level.

In today's episode, we'll be exploring some of the major challenges faced in a scaled Scrum environment, as well as discussing potential solutions and the importance of cultivating the right mindset. So let's jump right in!

Our first challenge revolves around the role of the Product Owner in Nexus Sprint Planning. According to the Scrum Guide, the Product Owner holds the ultimate decision-making power. However, when multiple teams conduct their own sprint planning sessions after the Nexus Sprint Planning, it becomes difficult for the Product Owner to actively participate in each team's planning. Can you imagine addressing domain knowledge questions or making prioritization decisions for multiple teams simultaneously? It would be a time-consuming and overwhelming task.

One potential solution to this challenge is asynchronous scheduling. By staggering the sprint planning sessions across teams, the Product Owner can allocate their time more efficiently. Additionally, organizations may consider designating a group of Product Owners to ease decision-making, although it brings its own set of complexities.

Another challenge faced in scaled Scrum environments is visualizing Product Backlog Refinement. As dependencies arise, it becomes crucial to identify and minimize them. However, existing tools like JIRA and Trello often fall short in providing an easy way to visualize the progress or resolution of these dependencies. This can make it difficult for Scrum Masters to manage dependencies effectively, as they may not fully grasp the complex technical implications.

To overcome this challenge, organizations can explore specialized visualization tools or customizations within existing tools to cater to their specific needs. By having a clear visual representation of dependencies, teams can more effectively prioritize and address them during Product Backlog Refinement sessions.

Lastly, let's talk about reviewing the Nexus Sprint through the lens of velocity. Integration work is an inevitable part of software development, but it can significantly impact a team's velocity. Each team works from its own estimation baseline and agenda, making it unclear who should take responsibility for overlapping work. Integration tasks, such as setting up servers, automating tests, and resolving Git merge conflicts, are time-consuming and crucial, yet they may not be fully accounted for in story points.

To address this challenge, teams can consider incorporating a dedicated Nexus Integration Team. This team would be responsible for handling cross-team integration tasks, ensuring smooth collaboration and addressing any post-integration issues that may arise. By having clear roles and responsibilities, teams can better manage their velocity and avoid misleading senior management with sudden drops due to integration work.

As we've explored these challenges, it's important to note that the mindset of the Nexus Integration Team is key to managing the complexity and unpredictability of software development. Meetings and tools are merely symptoms of a more fundamental challenge: getting everyone on the team, including organizational leaders, to understand and embrace agility.

By fostering a culture of continuous improvement and encouraging open communication, teams can overcome these challenges and create an environment where scaling Scrum becomes more manageable. It's not just about the process or the framework; it's about the people and their mindset.

And that's all we have for today's episode of Continuous Improvement! I hope you found our exploration of scaled Scrum and the Nexus framework insightful. Remember, it's not just about the challenges, but also about finding innovative solutions and embracing a mindset of agility and continuous improvement.

If you have any comments or experiences working in scaled Scrum environments, I'd love to hear from you. Feel free to reach out and share your thoughts. Until next time, this is Victor signing off. Stay agile, stay curious, and keep improving!

Internet Border Gateway Protocol (BGP)

Welcome back to another episode of Continuous Improvement, the podcast where we explore ways to enhance our knowledge and skills. I'm your host, Victor, and today, we're diving into the fascinating world of the Internet Border Gateway Protocol, commonly known as BGP.

BGP, a standardized exterior gateway protocol, plays a crucial role in exchanging routing and reachability information among Autonomous Systems (ASes) on the Internet, such as the networks operated by Internet Service Providers. It enables autonomous networks to interconnect and facilitates connections between ISPs.

BGP was introduced back in 1989 with the goal of providing policy control, loop detection, and scalability. Today, it serves as the foundational routing architecture of the global TCP/IP Internet.

One of the primary functions of BGP is enabling information exchange between autonomous networks without centralized control. This allows service providers to determine the best route for their customers' data, considering factors such as reachability, hop counts, and agreements with other providers.

BGP also plays a significant role in managing commercial issues among different service providers. For instance, ISPs may want to control excessive traffic to avoid additional costs, or they may have different routing policies based on contracts and agreements. BGP provides the flexibility to define the best routes according to these commercial considerations.
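To make that idea concrete, here's a toy sketch of policy-driven route selection, simplified from the real BGP decision process and entirely my own illustration: routes are compared first by a locally configured preference that encodes commercial agreements, and only then by path length.

    // Toy sketch (simplified from the real BGP decision process): prefer the
    // route with the higher local preference, which the operator sets to
    // reflect commercial agreements, and fall back to the shorter AS_PATH.
    function preferredRoute(a, b) {
      if (a.localPref !== b.localPref) {
        return a.localPref > b.localPref ? a : b;
      }
      return a.asPath.length <= b.asPath.length ? a : b;
    }

    const viaPeer = { localPref: 200, asPath: [65002, 65010] };
    const viaTransit = { localPref: 100, asPath: [65003] };
    console.log(preferredRoute(viaPeer, viaTransit)); // viaPeer wins despite the longer path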

So, how does BGP work? Well, the current version of BGP is Version 4, published as RFC 4271 in 2006. Unlike traditional distance-vector or link-state routing algorithms, BGP employs a path vector algorithm: it uses the path information stored in the AS_PATH attribute to detect and avoid routing loops.
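As a minimal illustration of that loop check (my own sketch, not code from any BGP implementation), a router simply rejects any advertised route whose AS_PATH already contains its own AS number:

    // Reject a route if our own AS number already appears in its AS_PATH,
    // since that would mean the route has looped back to us.
    function shouldAcceptRoute(localAsn, asPath) {
      return !asPath.includes(localAsn);
    }

    console.log(shouldAcceptRoute(65001, [65002, 65001, 65003])); // false: loop detected
    console.log(shouldAcceptRoute(65001, [65002, 65003]));        // true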

BGP updates routing table information only when changes occur, ensuring efficient use of bandwidth and processing power. However, it lacks an automatic discovery mechanism, so peer connections must be established manually. These connections are maintained over TCP (port 179) for reliable transport.

Let's take a closer look at the different BGP packet formats and their field functions. BGP messages are transmitted over TCP connections, and each message is processed only after it has been completely received.

The BGP message header format consists of three fields: a 16-byte Marker, a 2-byte Length, and a 1-byte Type. The Marker field is included for compatibility, the Length field indicates the total length of the message including the header, and the Type field specifies the message's type code: Open, Update, Notification, or Keepalive.
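To see those fields in action, here's a small Node.js sketch, my own illustration based on the header layout in RFC 4271, that parses the fixed 19-byte header:

    // Parse the fixed BGP message header (RFC 4271):
    // 16-byte Marker, 2-byte Length, 1-byte Type.
    const TYPE_NAMES = { 1: 'OPEN', 2: 'UPDATE', 3: 'NOTIFICATION', 4: 'KEEPALIVE' };

    function parseBgpHeader(buf) {
      if (buf.length < 19) throw new Error('BGP header is 19 bytes');
      // Bytes 0-15 are the Marker, set to all ones for compatibility.
      const length = buf.readUInt16BE(16); // total message length, 19 to 4096
      const type = buf.readUInt8(18);      // message type code
      return { length, type, typeName: TYPE_NAMES[type] };
    }

    // Example: a KEEPALIVE message consists of only the 19-byte header.
    const keepalive = Buffer.concat([Buffer.alloc(16, 0xff), Buffer.from([0, 19, 4])]);
    console.log(parseBgpHeader(keepalive)); // { length: 19, type: 4, typeName: 'KEEPALIVE' }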


Pseudo-Scrum - A Hybrid of Waterfall and Agile

Welcome to Continuous Improvement, the podcast where we explore the challenges of achieving true agility in today's organizations. I'm your host, Victor, and in today's episode, we're going to dive into why you might not be as agile as you think you are.

Picture this scenario: you've implemented all the scrum rituals, you have the tools and processes in place, but if the mindset isn't right, something fundamental is still missing. So, let's break it down, starting with the first reason why you might not be truly agile.

Reason number one: you have a detailed plan. Now, don't get me wrong, planning plays an essential role, but when the roadmap is fixed, the scope is unchanging, and the release plan is impractical, you're actually following a waterfall model. Scrum teams need the flexibility to adapt to change and align with top management's evolving priorities.

Moving on to reason number two: the absence of a true Scrum Master. Sure, you may have someone with the title on your org chart, but what's their actual role? Often, the Scrum Master is juggling multiple responsibilities, which leads to a lack of focus and derails the agile process. Even if you do have a dedicated Scrum Master, they may not have the authority or ability to address real impediments, hindering the team's progress.

Reason number three: no designated Product Owner. Someone needs to be in charge of the product, providing a clear vision and taking ownership. However, many times, the person in this role is preoccupied with other priorities, causing feature development to go off track. It's essential to have a Product Owner who can make informed decisions and guide the team effectively.

Now let's talk about reason number four: the lack of a budgeting strategy. Story points are not a substitute for proper budgeting. Manipulating estimates to secure more funds or negotiating downward to meet budget constraints only distorts the team's true velocity. Traditional accounting methods often clash with agile development, leading to burnout and compromised outcomes.

Finally, let me share my take on the Agile Manifesto. Prioritize responsiveness to change over adhering to a strict roadmap set by senior management. Value individuals and interactions over office politics. Emphasize working software over endless, pointless meetings. And most importantly, favor customer collaboration over budget negotiations. It's not an easy task, but it's the only way for bureaucratic organizations to adapt and thrive in the digital age.

And that's a wrap for today's episode of Continuous Improvement. I hope you've gained valuable insights into the key factors that may be hindering your organization's agility. Remember, it's not just about going through the motions, but embracing the mindset of continuous improvement.

Join me next time as we explore strategies to overcome these challenges and truly unlock the power of agility within your organization. Until then, keep striving for progress and continuous improvement.

Deploying a Koa.js Application to an AWS EC2 Ubuntu Instance

Hello everyone, and welcome to "Continuous Improvement," the podcast where we explore different strategies and techniques for improving our skills and knowledge in the technology world. I'm your host, Victor, and in today's episode, we're going to dive into deploying a Koa.js application on an Amazon Web Services (AWS) Ubuntu server.

But before we begin, a quick reminder to subscribe to our podcast on your favorite platform and follow us on social media to stay updated on all our latest episodes. Alright, let's get started!

The first step in deploying our Koa.js application is to launch an Ubuntu instance on AWS. Now, it's important to modify the security group settings to ensure our application is accessible.

As shown in the images in the accompanying blog post, you need to add inbound rules for HTTP (port 80) and HTTPS (port 443). Without these changes, accessing the public domain in a browser would hang in a "Connecting" state, eventually timing out and rendering the site unreachable.

Now that we have our Ubuntu instance set up, the next step is to install Node.js, the runtime environment for our Koa.js application. SSH into your instance and follow the official documentation instructions to install Node.js.
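As a quick sketch of what that can look like on Ubuntu (one common approach; follow the official docs for the exact version you need):

    sudo apt-get update
    sudo apt-get install -y nodejs npm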

With Node.js successfully installed, we now move on to setting up Nginx as a reverse proxy server. Nginx will help us route traffic to our Koa.js application.

First, we need to install Nginx by running the appropriate commands. Once that's done, we'll open the Nginx configuration file and make the necessary edits, including adding the server block with the reverse proxy settings. Don't forget those semicolons!
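On Ubuntu, the install is typically just sudo apt-get install -y nginx. As for the server block, here's a minimal sketch, assuming the Koa.js app will listen on port 3000; adjust the port and server_name to your own setup:

    server {
        listen 80;
        server_name your_public_dns_or_domain;

        location / {
            # Forward incoming traffic to the Koa.js app on port 3000
            proxy_pass http://127.0.0.1:3000;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }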

After saving the configuration file, we need to restart the Nginx service to apply the changes.
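Depending on your Ubuntu version, that restart is something like:

    sudo service nginx restart    # or: sudo systemctl restart nginx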

Now that our server and reverse proxy are set up, it's time to deploy our Koa.js application. Clone your Git repository into the /var/www/yourApp directory on the Ubuntu instance. Keep in mind that you may encounter a "Permission Denied" error, but it can be easily fixed by changing the ownership of the folder.
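For example, assuming you're logged in as the default ubuntu user, taking ownership of the directory looks like this:

    sudo mkdir -p /var/www
    sudo chown -R ubuntu:ubuntu /var/www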

Great! With the application files in place, it's time to create a simple app.js file to run our Koa.js server. The code in this file sets up a basic Koa.js server with a logger and a response that says "Hello World".
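Here's a minimal sketch of what that app.js might contain, assuming Koa 2 and port 3000, the same port the Nginx reverse proxy points to:

    const Koa = require('koa');
    const app = new Koa();

    // Logger middleware: print the method, URL, and response time of each request
    app.use(async (ctx, next) => {
      const start = Date.now();
      await next();
      console.log(`${ctx.method} ${ctx.url} - ${Date.now() - start}ms`);
    });

    // Respond to every request with "Hello World"
    app.use(async (ctx) => {
      ctx.body = 'Hello World';
    });

    app.listen(3000);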

We're almost there! Just a few more steps. Start the server by running the node app.js command in the terminal.

And finally, open your browser and navigate to your public domain. If everything was done correctly, you should now see your Koa.js application running.

Congratulations! You've successfully deployed your Koa.js application on an AWS Ubuntu server. I hope this step-by-step guide has been helpful to you. If you have any questions or need further assistance, please feel free to leave a comment on the blog post.

That wraps up this episode of "Continuous Improvement." I hope you found the information valuable and that it inspires you to continue expanding your skills and knowledge. Don't forget to subscribe to our podcast and follow us on social media for more episodes like this one. Thanks for tuning in, and until next time, keep improving!


Lessons Learned from an IoT Project

Hello, and welcome to Continuous Improvement, the podcast where we explore the challenges and triumphs of project development in the ever-evolving landscape of technology. I'm your host, Victor, and today we're discussing a topic close to my heart: the experience of working on an Internet of Things project.

Last year, I had the opportunity to work on a fascinating project focused on a Bluetooth smart gadget. But let me tell you, it was quite a departure from pure software development. Today, I want to share with you some of the unique challenges I faced and the lessons I learned along the way.

One of the major challenges I encountered was the integration of various components. You see, different aspects of the project, such as mechanical, firmware, mobile app, and design components, were outsourced to multiple vendors. And to make things even more complex, these vendors had geographically dispersed teams and different work cultures. It was like putting together a puzzle with pieces from different boxes.

When developers are so specialized that they work in silos, the standard Scrum model doesn't function as effectively. Collaboration becomes essential, and that's when effective communication truly shines.

Another hurdle I faced was the difference in duration between hardware and software iterations. Unlike software, which can be easily modularized, hardware iterations take a much longer time. This made adapting to changes and delivering a Minimum Viable Product (MVP) for consumer testing quite challenging. And without early user feedback, prioritizing features became a tough task. It almost felt like a waterfall-like approach in a fast-paced technology world.

Additionally, diagnosing issues became a puzzle of its own. With multiple components from different vendors, it was difficult to determine whether problems stemmed from mechanical design, firmware, or mobile app development. End-to-end testing also grew more complex as interfaces evolved. And without comprehensive hardware automation, testing became a time-consuming process.

So, what did I learn from these unique challenges? Well, it all comes down to effective communication and a problem-solving mindset. Empathy is crucial. Instead of pointing fingers or becoming defensive, it's vital to understand issues from the other person's perspective. Building strong interdepartmental relationships is essential for the success of any IT project.

Customers judge the performance of a product based on the value they derive from it. By adopting an empathetic and problem-solving mindset, we can reduce wasted time and effort, ultimately improving overall performance.

And with that, we've reached the end of today's episode. I hope you found my insights into IoT project development valuable. Remember, embracing continuous improvement is key to succeeding in this ever-changing landscape.

Join me on the next episode of Continuous Improvement, where we'll dive into another fascinating topic. Until then, happy developing!

How to Fix iOS 10 Permission Crash Errors

Welcome to Continuous Improvement, the podcast where we delve into the world of app development and discuss common issues developers face on a regular basis. I'm your host, Victor, and in today's episode, we're going to address a problem that many of us have encountered - app crashes after an operating system update. Specifically, we'll be focusing on an error related to privacy-sensitive data access while using the microphone on iOS 10.

So, picture this: You've developed an amazing app that runs smoothly on iOS 9. Everything is going great until you make the daring decision to upgrade to iOS 10. Suddenly, your app starts crashing, leaving you puzzled and frustrated. But fear not, my fellow developers! I am here to guide you through this ordeal.

The error message that appears in the terminal states, "This app has crashed because it attempted to access privacy-sensitive data without a usage description. The app’s Info.plist must contain an NSMicrophoneUsageDescription key with a string value explaining to the user how the app uses this data." Quite a mouthful, right?

The solution is quite straightforward. To resolve this crash caused by microphone access, we need to make a quick edit in the Info.plist file. Essentially, we'll be adding a description about why our app needs microphone access, so that it complies with iOS 10's privacy requirements.

So, let's jump into it. Open your Info.plist file as source code and insert the following lines:

    <key>NSMicrophoneUsageDescription</key>
    <string>Provide a description explaining why your app needs microphone access.</string>

By adding this snippet to your Info.plist file, you're providing a clear message to users about why your app requires microphone access. This is a crucial step to ensure compliance with iOS 10's privacy rules.

Now, let's not forget about potential crashes related to camera or contacts access. If your app requires these permissions, be sure to include the appropriate lines in your Info.plist file as well.

For camera access:

    <key>NSCameraUsageDescription</key>
    <string>Provide a description explaining why your app needs camera access.</string>

And for contacts access:

    <key>NSContactsUsageDescription</key>
    <string>Provide a description explaining why your app needs contacts access.</string>

Remember, providing users with clear and concise explanations for why your app needs these privacy-sensitive permissions is vital to maintaining user trust and satisfaction.

And that's it! By making these edits, you'll be able to successfully prevent crashes caused by privacy-sensitive data access after updating to iOS 10.

Well, that's all for today's episode. I hope you found this information useful and it helps you overcome the microphone access crash issue.

If you have any questions or topics you'd like me to cover in future episodes, feel free to reach out to me on Twitter @VictorDev.

Thanks for tuning in to Continuous Improvement. Until next time, happy coding!

The Future of FinTech in Hong Kong

Welcome, everyone, to another episode of "Continuous Improvement." I'm your host, Victor. Today, we're diving into a topic that hits close to home for us here in Hong Kong - the FinTech revolution. Now, it's no secret that Hong Kong is an international financial center, but it's time to take a hard look at where we stand in the world of FinTech.

You see, while we enjoy economic success in our highly competitive corporate environment, our neighbors in Singapore have seized the opportunity and aggressively moved ahead in the FinTech race. The Singaporean government has played a crucial role in attracting FinTech companies by providing incentives and clear regulations. Furthermore, mainland China's FinTech firms have thrived on the extensive client base available to them.

The challenge is clear - Hong Kong's risk-averse mentality is slowing the progress of our own FinTech industry. Many individuals in the banking sector express concerns about disruptive technologies like blockchain, Bitcoin, and mobile payments. They fear that these innovations could jeopardize their businesses and result in failure to adapt.

But here's where the silver lining comes in. Hong Kong is home to a diverse group of innovative and creative individuals. We have the potential to assemble outstanding teams that can inspire and contribute to the creation of the world's best FinTech ecosystem. It's time to elevate our awareness and reimagine what is possible for our city when financial technology serves as a catalyst for positive industry transformation.

In my opinion, this is the desired outcome - guiding global financial technology to become more human-centered. We're fortunate to have a legal sandbox policy that allows companies to test their innovative ideas in the marketplace. These financial technologies have the potential to positively impact lives around the globe. Together, let's utilize the language and tools of FinTech to reestablish Hong Kong as the regional hub for FinTech commerce.

Before we wrap up for today, I encourage you all to join the conversation. What steps do you think Hong Kong needs to take to catch up in the FinTech revolution? Share your thoughts and ideas with us via our website or social media channels.

That's all for today's episode of "Continuous Improvement." Thank you for tuning in, and remember, growth comes through continuous improvement. Until next time!

What is Blockchain and How is It Used?

Hello and welcome to Continuous Improvement, the podcast where we explore the latest advancements and innovations shaping our world. I'm your host, Victor, and in today's episode, we will delve into the exciting topic of blockchain technology.

Many of my friends have been asking me about the emergence of the blockchain revolution, and I must say, the possibilities are truly remarkable. According to recent news, four of the world's largest banks have teamed up to develop a new form of digital cash. This digital cash aims to become an industry standard for clearing and settling financial trades over blockchain technology. Meanwhile, Ripple has raised $55 million in Series B funding, highlighting the growing interest and investment in this field.

So, let's start by understanding what exactly blockchain is. Simply put, it is a data structure that serves as a digital ledger of transactions. What sets it apart is that this ledger is shared across a large, distributed network of computers. Utilizing state-of-the-art cryptography, the technology securely manages the ledger.

Blockchain operates on a consensus model where every node agrees to every transaction, eliminating the need for a central counterparty in traditional settlement processes. This offers broad implications for cross-currency payments by making them more efficient, eliminating time delays, and reducing back-office costs.

But how is blockchain used in practice? Well, it allows for direct bank-to-bank settlements, enabling faster and lower-cost global payments. Some applications of this technology include remittance services for retail customers, international transactions, corporate payments, and cross-border intra-bank currency transfers.

The innovation lies in the fact that transactions can occur without needing to know who the other party is. This feature, coupled with the idea of a distributed database, where trust is established through mass collaboration rather than a centralized institution, sets the stage for many exciting possibilities.

So, what problems could be solved with blockchain? Well, it goes beyond the financial market. This technology could provide an immutable record that can be trusted for various uses. In a blockchain, once a block of data is recorded, it becomes very difficult to alter. This can be used for genuine privacy protection. Blockchain could also serve as the basis for an open protocol for web-based identity verification, creating a 'web-of-trust' and storing data in an encrypted format.
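To make that immutability concrete, here's a toy sketch, purely my own illustration rather than any production blockchain code, showing how each block's hash incorporates the previous block's hash:

    const crypto = require('crypto');

    // Hash a block's data together with the previous block's hash.
    function blockHash(prevHash, data) {
      return crypto.createHash('sha256').update(prevHash + data).digest('hex');
    }

    // Build a tiny three-block chain.
    const h1 = blockHash('0'.repeat(64), 'Alice pays Bob 10');
    const h2 = blockHash(h1, 'Bob pays Carol 5');
    const h3 = blockHash(h2, 'Carol pays Dave 2');
    console.log(h3); // the chain tip depends on everything before it

    // Tampering with block 1 changes h1, which no longer matches what block 2
    // recorded, so every later block would have to be recomputed as well.
    console.log(h1 !== blockHash('0'.repeat(64), 'Alice pays Bob 99')); // true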

The potential of blockchain is enormous, and its ability to disrupt traditional banking is evident. With its decentralized nature and secure transactions, it has the power to reshape the way we handle cross-border payments and even how we establish trust in various aspects of our lives.

Well, that's all we have time for today on Continuous Improvement. I hope you found this episode informative and thought-provoking. Stay tuned for more exciting discussions on the advancements and innovations shaping our world.

Installing Jupyter Notebook on macOS

Hello and welcome back to "Continuous Improvement," the podcast where we explore practical tips and techniques for personal and professional growth. I'm your host, Victor, and in today's episode, we'll be discussing the process of installing Jupyter Notebook using the Anaconda distribution.

If you're an aspiring data scientist or simply someone interested in coding and data analysis, Jupyter Notebook is an incredibly useful tool. It allows you to create and share documents that contain live code, equations, visualizations, and narrative text.

So, let's dive right in!

The first step in installing Jupyter Notebook is to download the Anaconda distribution. Head over to https://www.anaconda.com/products/distribution and download the installer for your operating system.

Once you have downloaded the Anaconda installer, it's time to install it on your machine. Run the installer and follow the graphical prompts that appear on your screen. The installation process is pretty straightforward, but if you encounter any issues, make sure to check the Anaconda documentation for troubleshooting tips.

Once the installation is complete, you might want to test if Jupyter Notebook is working properly. Open your terminal or command prompt and type the following command:

jupyter notebook

However, you might encounter an error at this point. Don't worry, it's a common issue. The error message could be something like:

> zsh: command not found: jupyter

You're seeing this error because Anaconda's bin directory isn't on your shell's PATH, which is also why the conda command can't be found. But fret not, there's a simple fix to get things running smoothly.

Open your .zshrc file with your preferred text editor. You can do this by typing:

vim ~/.zshrc

In the .zshrc file, add the following line at the bottom:

export PATH="$HOME/anaconda3/bin:$PATH"

Save the file and close the text editor. Now, reload your shell: either close and reopen your terminal, or run source ~/.zshrc. Then try running Jupyter Notebook once again.

Great! Now Jupyter Notebook should be accessible at http://localhost:8888/. You can start creating your notebooks and explore the world of data analysis, visualization, and coding.

That's all for today's episode of "Continuous Improvement." I hope you found this tutorial on installing Jupyter Notebook using the Anaconda distribution helpful. Remember, continuous improvement is key to personal and professional growth, so keep exploring, learning, and enhancing your skills.

If you have any questions or suggestions for future episodes, feel free to reach out to us. You can find us on Twitter, Instagram, or Facebook at @continuousimprovementpodcast.

Take care, and until next time!

Launching RancherOS on AWS EC2

Welcome back to another episode of Continuous Improvement, the podcast dedicated to helping you enhance your skills and knowledge in the world of technology. I'm your host, Victor, and today we are diving into the world of RancherOS, a Linux distribution specifically designed for running Docker containers.

But before we dive in, I want to remind you to subscribe to our podcast wherever you listen to your favorite shows, so you never miss an episode. And if you have any questions or suggestions for future topics, feel free to reach out to us on our website or social media channels. Okay, let's get started!

Today, we're focusing on a step-by-step guide for setting up RancherOS on AWS. Now, there is an AMI available in the AWS Marketplace, but there are some additional configurations and security group setups that can be a bit tricky. And that's where this guide comes in as the missing manual. So, let's jump right into it.

STEP 1: Launch an Instance with the Rancher AMI. Assuming you already have a .pem key, go ahead and launch an instance and select the Rancher AMI.

STEP 2: Connect to Your Instance. Open a terminal and connect to your instance using SSH. It's important to note that you should use the 'rancher' user instead of root.

ssh -i "XXX.pem" rancher@ec2-XX-XXX-XX-XX.ap-southeast-1.compute.amazonaws.com

STEP 3: Verify the Rancher Server. Check if the Rancher server is already running by executing the following command:

docker ps

If it's not running, download and start the server using Docker:

docker run -d -p 8080:8080 rancher/server

STEP 4: Configure Security Groups. Head over to the Security Group tab in the AWS console and create a new security group with the appropriate inbound rules. These rules should include ports for Docker Machine, Rancher network, UI, and the site you deploy.
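As a rough guide, and assuming Rancher 1.x defaults (double-check the documentation for your version), that typically means inbound rules for TCP 22 (SSH), TCP 2376 (Docker Machine), TCP 8080 (the Rancher UI), UDP 500 and 4500 (Rancher's IPsec overlay network), and TCP 80 and 443 for the site you deploy.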

STEP 5: Assign the New Security Group. Select the instance and navigate to Actions > Networking > Change Security Group. Choose the new Security Group ID and assign it to your instance.

STEP 6: Access the Rancher UI. Open a browser and enter the Public DNS with port 8080, for example: http://ec2-XX-XXX-XX-XX.ap-southeast-1.compute.amazonaws.com:8080. You should now see the Rancher UI.

STEP 7: Add Host Using AWS Credentials. To add a host with Amazon EC2, you'll need the Access Key and Secret Key. If you don't have them, navigate to AWS Console > IAM > Create New Users and download the credentials.csv file. Attach the required policy to the user by searching for "AmazonEC2FullAccess".

STEP 8: Enter AWS Credentials in Rancher UI. Return to the Rancher UI and enter the newly generated Access Key and Secret Key from the credentials.csv file. Fill out the necessary information, and voila! You'll have your host up and running.

POSTSCRIPT: For those of you looking to manage Docker's secret API keys, certificate files, and production configuration, you can explore the beta integration of Vault based on your specific needs.

And that's it for today's episode of Continuous Improvement. I hope this step-by-step guide helps you navigate the process of setting up RancherOS on AWS. Remember, practice makes perfect, so don't be afraid to experiment and learn along the way.

Thank you for tuning in! Make sure to join us next time when we explore more exciting topics and dive deeper into the world of technology. Until then, keep improving and keep learning.

This has been Victor, your host of Continuous Improvement, signing off. Stay curious, my friends.