
Migrating WordPress MySQL Database to AWS RDS

Hello everyone, and welcome to another episode of Continuous Improvement. I'm your host, Victor, and today we have an exciting topic to discuss - migrating your local WordPress MySQL database to Amazon Web Services Relational Database Service, also known as RDS. If you've been considering this migration for improved performance, database scalability, and easier maintenance, then you've come to the right place. So let's dive right into it.

The first step is to navigate to the AWS console and choose RDS. From there, you'll need to create a MySQL database. Fill in the creation form with a DB Instance Identifier, Master Username, and Master Password. Most of the settings can be left at their default values, including the default VPC. If you're unsure, don't worry, we'll guide you through the process.

Next, it's time to back up your existing WordPress database. SSH into your WordPress instance and use the command mysqldump -u root -p [YourDatabaseName] > backup.sql to create a backup of your database. Remember to replace [YourDatabaseName] with the actual name of your database, such as bitnami_wordpress. This backup file will be crucial in the migration process.
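
Using bitnami_wordpress as the example database name, the full command looks like this (you'll be prompted for your MySQL root password):

    mysqldump -u root -p bitnami_wordpress > backup.sql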

Now that you've backed up your database, it's time to import it into your newly created AWS RDS instance. Use the command mysql -u admin -p -h [RDS_ENDPOINT] -D wordpress < backup.sql to import the backup. Just remember to replace admin and [RDS_ENDPOINT] with your own values. If you encounter any issues during this step, we've got your back.
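
Here is that import command as a copy-and-paste block, with the placeholders left in for your own master username and RDS endpoint:

    mysql -u admin -p -h [RDS_ENDPOINT] -D wordpress < backup.sql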

In case you come across an error stating "ERROR 1049 (42000): Unknown database 'wordpress'," it means your wordpress database hasn't been created yet. Don't worry, it's an easy fix. Start by connecting to the database using mysql -h [RDS_ENDPOINT] --user=admin --password=[YourPassword]. Once connected, create a new database with the command mysql> CREATE DATABASE wordpress;. Make sure you exit MySQL after creating the database.
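
Put together, the fix looks like this: run the first line from your shell, then the two statements at the MySQL prompt.

    mysql -h [RDS_ENDPOINT] --user=admin --password=[YourPassword]
    mysql> CREATE DATABASE wordpress;
    mysql> exit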

Lastly, you'll need to edit the wp-config.php file in your WordPress EC2 instance. This file is typically located in your WordPress directory, such as /home/bitnami/apps/wordpress/htdocs if you're using Bitnami. Update the values for DB_NAME, DB_USER, DB_PASSWORD, and DB_HOST to match your newly created RDS instance. Once you've made the changes, save the file, and you're all set!
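
As a rough sketch, the relevant lines in wp-config.php end up looking something like this, with the bracketed values swapped for your own RDS details:

// placeholders below: use your own RDS endpoint and credentials
define('DB_NAME', 'wordpress');
define('DB_USER', 'admin');
define('DB_PASSWORD', '[YourPassword]');
define('DB_HOST', '[RDS_ENDPOINT]');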

And that's it! You've successfully migrated your local WordPress MySQL database to AWS RDS. Now you can enjoy improved performance, scalability, and easier maintenance of your website. If you have any questions or need further assistance, feel free to reach out. We're here to help you every step of the way.

Thank you all for tuning in to this episode of Continuous Improvement. I hope you found this guide helpful and that it inspires you to take advantage of the benefits AWS RDS can offer for your WordPress database. Stay tuned for more episodes focused on helping you improve your workflows, technology, and more. I'm Victor, your host, signing off. Until next time!

Removing .DS_Store Files from Git Repositories

Welcome back to another episode of Continuous Improvement, the podcast where we discuss tips and tricks for enhancing your productivity and problem-solving abilities. I'm your host, Victor, and in today's episode, we'll be addressing a common issue faced by Mac users who also use Git.

If you're a Mac user who has accidentally committed a .DS_Store file, you might be wondering why this is a problem and how it can confuse your Windows colleagues. Well, fear not, because today I'll be sharing some insights on what these files are and how you can avoid committing them.

So, what exactly is a .DS_Store file? The 'DS' in .DS_Store stands for Desktop Services, and these files are used by Macs to determine how to display folders when you open them. They store custom attributes such as the positions of icons and are created and maintained by the Finder application on a Mac. Normally, these files remain hidden from view.

However, for Windows users, these .DS_Store files aren't useful and can cause confusion when they find them in a Git repository. Fortunately, there's a straightforward solution to remove these files from your repository.

To remove existing .DS_Store files, you'll need to run a simple command in your terminal. Here's what you need to type:

    find . -name .DS_Store -print0 | xargs -0 git rm -f --ignore-unmatch

This command will find all the .DS_Store files in your repository and remove them. Remember to commit the changes and push them to your remote repository to ensure these files are permanently removed.
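
A typical follow-up, with an example commit message, would be:

    git commit -m "Remove .DS_Store files"
    git push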

But how do you prevent .DS_Store files from being added again in the future? It's simple! You just need to edit your .gitignore file and add the following line:

    .DS_Store

By adding this line to your .gitignore file, you're telling Git to ignore any .DS_Store files that are present in your repository.

By following these steps, you'll address any concerns your Windows colleagues may have about these files and maintain a cleaner, more organized Git repository.

That concludes today's episode of Continuous Improvement. I hope you found this information valuable and will be able to implement these steps in your own Git workflow. Thank you for tuning in, and remember to keep striving for continuous improvement!

Ignoring Already Modified Files in Git

Welcome to the Continuous Improvement podcast, where we explore tips, tricks, and strategies for enhancing your personal and professional growth. I'm your host, Victor, and in today's episode, we'll be discussing how to handle a rare scenario in Git when you want to modify a file without committing the changes.

Have you ever found yourself in a situation where you need to modify a file, but you don't want to include those changes in your Git commit? Perhaps you're experimenting with some code, or you have local configuration changes that are specific to your environment. Well, there's actually a solution for this, and today we'll be diving into it.

Now, typically, when you want to exclude a file or a directory from being tracked by Git, you would use the .gitignore file. However, this method won't work if the file is already being tracked. So what should we do in such cases?

The solution lies in the git update-index command. By using this command, we can manually ignore specific files without modifying our .gitignore file. Let me walk you through the process.

To ignore a file and prevent it from being committed, you need to execute the following command in your Git terminal:

    git update-index --assume-unchanged <file path>

Let's break this down. --assume-unchanged is the flag used to tell Git to ignore the file, and <file path> refers to the specific file you want to exclude. By executing this command, Git will no longer consider any changes made to that file when you're committing your code.
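
As a concrete example, suppose you have a local configuration file at config/local.env (a made-up path just for illustration). You could flag it and then double-check which files are currently flagged, since git ls-files -v marks assumed-unchanged files with a lowercase 'h':

    git update-index --assume-unchanged config/local.env
    git ls-files -v | grep '^h'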

Now, what if you want to start tracking the file again and include its changes in future commits? No worries. You can simply revert this action by using the following command:

    git update-index --no-assume-unchanged <file path>

So, in essence, this command undoes the ignore action and allows Git to track any future modifications you make to the file.

It's important to note that while using --assume-unchanged, if you make any changes to the file and try to switch branches, Git might prompt you to either stash or discard those changes. So, be cautious and ensure you're aware of the potential consequences.

And there you have it - a simple and effective way to modify a file without committing the changes to Git. Remember to use the git update-index command with the --assume-unchanged flag to ignore the file, and the --no-assume-unchanged flag to revert it.

I hope you found this tip useful, and if you have any questions or need further clarification, feel free to reach out. The power of Git lies in its flexibility, and it's always good to know these tricks that can make your version control workflow smoother.

That wraps up today's episode of Continuous Improvement. Thank you so much for tuning in and joining me on this journey of growth. If you enjoyed the show, don't forget to subscribe and leave a review. Stay tuned for more insights and strategies to help you continuously improve. Until next time, I'm Victor, signing off.

Find and Kill Processes Locking Specific Ports on a Mac

Welcome to "Continuous Improvement," the podcast where we explore tips, tricks, and solutions for overcoming development hurdles. I'm your host, Victor, and in today's episode, we'll be discussing a common issue many developers encounter while working with Node.js servers: the dreaded port lock error.

Have you ever tried to start a local Node.js server only to find out that the port you want to use is already in use and locked? It can be quite frustrating, but fear not, because today we have a solution for you.

The problem occurs when you try to start your server and receive an error message that says, "Error: listen EADDRINUSE..." followed by the IP address and port number. But worry not, there's a way to identify which process is locking that port.

One method is to utilize the command-line tool called lsof. Simply open your terminal, and type in lsof -n -i4TCP:<port>. Replace <port> with the specific port number you want to investigate, such as 8080.

Once you execute this command, you'll be provided with a list of processes currently using that port. Take note of the process you wish to terminate. For example, you might see a process like node running with a PID (Process ID) of 6709.

Now comes the moment of truth. Execute the following command to kill the process and free up the locked port: kill -9 <PID>. Remember to replace <PID> with the actual Process ID you want to terminate, in this case, 6709.
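
Using the example values from above, port 8080 and PID 6709, the two commands look like this:

    lsof -n -i4TCP:8080
    kill -9 6709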

Once you've successfully terminated the process, you're almost out of the woods. Now you can restart your server, and it should run normally without encountering the port lock error.

And there you have it! A simple yet effective solution for tackling the port lock issue in Node.js. Remember to use the lsof command to identify the process locking the port, and then terminate it using kill -9 <PID>.

That concludes today's episode of "Continuous Improvement". I hope this solution will help you overcome any port-related obstacles you may encounter in your development journey. Thanks for listening, and until next time, keep striving for continuous improvement in your coding endeavors.

How to Work with a Product Manager as a Software Engineer

Welcome to "Continuous Improvement," the podcast where we delve into the world of software engineering and explore ways to enhance our skills and improve our professional lives. I'm your host, Victor, and in today's episode, we'll be discussing a common challenge faced by software engineers – working effectively with product managers. Whether you've had great experiences or not-so-great ones, we'll explore some practical advice to help you navigate these working relationships. So let's jump right in!

One of the primary difficulties when working with product managers is the gap in technical understanding. As software engineers, we often face challenges that go beyond what seems apparent to others. It's frustrating to hear phrases like, "It's just a simple button. Can't you finish it quickly?" These comments undermine the complexity of our work and can lead to a lack of mutual respect.

Take, for example, the search button on Google's homepage. It may seem simple, but it's not just an ordinary button. From considering different states like hover, click, double-click, to accounting for text localization, accessibility, and various screen widths – it requires meticulous attention to detail. By educating PMs about these complexities with empathy and kindness, we can bridge the gap and foster a more collaborative environment.

Another challenge arises when the roles and responsibilities of engineers and product managers are misunderstood. In hierarchical organizational structures, or when engineers from an outsourced vendor report day-to-day to in-house PMs, it's easy to fall into the belief that PMs are our bosses. However, PMs are accountable for the product; they are not our direct supervisors. Implementing methodologies like Scrum can help establish boundaries and shape realistic expectations.

When requirements constantly change, it not only disrupts our workflow but also hampers the quality of our code. Non-reusable code, bugs, and technical debt become persistent issues, leading to a stressful work environment. Clear communication and collaboration are key to navigating these challenges successfully.

Lastly, a lack of clear objectives can be a significant hindrance to software engineers. We thrive on tackling challenges and require clear goals to measure our impact and achieve success. When PMs fail to define specific requirements and provide a clear vision, it's important to approach them constructively and communicate the need for clarity. By highlighting the importance of well-defined objectives, we can create a more productive working environment.

To sum it all up, let me share three recommendations for working effectively with PMs:

  1. Treat non-technical stakeholders with empathy and kindness while educating them about the technical complexities you face.
  2. Keep in mind that PMs are not your bosses; foster a collaborative environment and be willing to share credit for successes.
  3. Stay updated on industry trends and be prepared to construct persuasive arguments when you believe the requirements are flawed.

Remember, software development is a team sport, and effective communication, collaboration, and leadership are vital for success. By embracing these practices, we can enhance our working relationships with product managers and create a more productive and enjoyable work environment.

That's all for this episode of "Continuous Improvement." I hope you found these insights helpful in navigating your interactions with product managers. If there's a specific topic you'd like me to cover in a future episode, feel free to reach out and let me know. Until next time, keep striving for continuous improvement in your software engineering journey!

Registering Sling Servlets in Adobe Experience Manager

Hello, and welcome to "Continuous Improvement," the podcast that provides tips, insights, and strategies for enhancing your skills in software development. I'm your host, Victor.

In today's episode, we'll dive into the world of Adobe Experience Manager and explore how to handle RESTful request-response AJAX calls using Sling servlets. We'll discuss two methods to register these servlets in AEM - by path and by resourceType. So, let's get started!

Sling servlets, written in Java, are designed to handle specific AJAX calls within AEM applications. They can be registered as OSGi services and are useful for executing various tasks based on incoming requests.

Let's begin with the first method - registering a servlet by path. Imagine you want to handle a form POST request at the path /bin/payment. To do this, you'll need to annotate your servlet class using the following code:

[Code Mention]
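
The code itself isn't read out in the transcript, but here's a minimal sketch of what this kind of servlet can look like, using the classic Felix SCR @SlingServlet annotation (newer AEM versions use the OSGi DS annotations instead). The class name and response body are just illustrative:

import java.io.IOException;

import javax.servlet.ServletException;

import org.apache.felix.scr.annotations.sling.SlingServlet;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.SlingHttpServletResponse;
import org.apache.sling.api.servlets.SlingAllMethodsServlet;

@SlingServlet(paths = "/bin/payment", methods = "POST")
public class PaymentServlet extends SlingAllMethodsServlet {

    @Override
    protected void doPost(SlingHttpServletRequest request, SlingHttpServletResponse response)
            throws ServletException, IOException {
        // illustrative only: echo back a simple confirmation
        response.setContentType("application/json");
        response.getWriter().write("{\"status\": \"payment received\"}");
    }
}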

This annotation ensures that your servlet is triggered when a POST request is sent to http://localhost:4502/bin/payment. The doPost method within the servlet class will be invoked, allowing you to perform your desired tasks.

It's important to have a local AEM instance running on port 4502 and install the bundle module using the Maven bundle plugin before registering your servlet. You can check if the bundle is installed by navigating to http://localhost:4502/system/console/bundles. If it's not installed, you can manually upload the JAR file.

Now, what happens if you encounter a "forbidden" error when trying to serve a request to /bin/payment? Don't worry; I've got you covered!

Here's what you can do:

  1. Go to http://localhost:4502/system/console/configMgr.
  2. Search for 'Apache Sling Referrer Filter'.
  3. Remove the POST method from the filter. This step allows triggering the POST method from any source.
  4. Locate the 'Adobe Granite CSRF Filter'.
  5. Remove the POST method from the filter methods as well.
  6. Save the changes and give your servlet another try.

By following these steps, you should be able to resolve the "forbidden" error and successfully trigger your servlet.

Now, let's move on to the second method - registering a servlet by resourceType. This approach is more flexible and avoids the aforementioned issues. Here's how you can do it:

[Code Mention]
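
Again, the exact code isn't included in the transcript, but the key change is swapping the paths attribute for resourceTypes. A sketch, reusing the hypothetical servlet from above:

@SlingServlet(resourceTypes = "services/payment", methods = "POST")
public class PaymentServlet extends SlingAllMethodsServlet {
    // doPost stays the same as in the path-registered version above
}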

Refactor your servlet by using this annotation and specify the desired resourceType. For example, services/payment or any other resourceType that matches your servlet. This way, your servlet will be triggered by requests to pages with the specified resourceType.

To test your servlet, you'll need to create a page that triggers its resourceType:

  1. Go to CRXDE Lite at http://localhost:4502/crx/de/index.jsp.
  2. Inside the /content folder, create a page, for example, http://localhost:4502/content/submitPage.html.
  3. In the resourceType properties, enter services/payment or the corresponding resourceType from your servlet.
  4. Save your changes and test the POST request by visiting http://localhost:4502/content/submitPage.html. It should work as expected.

An extra tip for you! You can use the Apache Sling Resource Resolver at http://localhost:4502/system/console/jcrresolver to verify if your servlet has been successfully registered.

And that wraps up today's episode of "Continuous Improvement." We explored the world of Sling servlets in Adobe Experience Manager, discussing how to register them by both path and resourceType.

Thank you for joining me, Victor, your host, on this journey of continuous improvement. I hope you found today's episode valuable in expanding your skills as a software developer.

If you have any questions or comments, feel free to reach out in the comments section of the associated blog post.

Don't forget to subscribe to "Continuous Improvement" for more insightful episodes and updates. Until next time, keep striving for excellence and embracing the world of continuous improvement.

Installing Nextcloud on AWS EC2 with S3 Storage

Welcome to another episode of Continuous Improvement! I'm your host, Victor, and today we're diving into the world of privacy and data control. In a recent blog post, I shared my journey of minimizing the use of Google products and opting for alternatives that provide more control over our personal data. One of the key changes I made was switching from Google Drive to Nextcloud, a self-hosted cloud storage solution. So, if you're interested in learning how to install Nextcloud on AWS EC2 and configuring it to use S3 storage, you're in the right place! Let's get started.

The first step is to install Nextcloud, and to do that, we'll be using the Snap package manager. Open your terminal and enter the command: sudo snap install nextcloud. This will initiate the installation process. Once completed, move on to the next step.

Now that we have Nextcloud installed, let's create an admin user account. In your terminal, simply type: sudo nextcloud.manual-install, followed by your desired username and password. Make sure to remember these login credentials!
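
The command takes the username and password as its two arguments, so it looks like this (the values here are placeholders):

    sudo nextcloud.manual-install <admin-username> <admin-password>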

With our admin account set up, we need to add our trusted domain. This allows Nextcloud to verify incoming requests from a specific domain. In your terminal, execute the command: sudo nextcloud.occ config:system:set trusted_domains 1 --value=yourdomain.com, replacing yourdomain.com with your actual domain.

Next, we need to create an A record on AWS Route 53 that points to the IP address of our Nextcloud server. This ensures that when we navigate to our domain, it will correctly link to our Nextcloud instance.

Security is important, so let's set up an SSL certificate with Let's Encrypt to enable secure communication between our Nextcloud server and clients. In your terminal, type: sudo nextcloud.enable-https lets-encrypt. This will initiate the SSL certificate creation process and ensure that your data remains encrypted.

Now, it's time to put our setup to the test. Open your browser and navigate to your domain. You should now see the Nextcloud login page. Enter your admin username and password that we set up earlier, and voila! You're now logged in to your Nextcloud instance.

To enhance the functionality of Nextcloud, we can enable some useful apps. In your Nextcloud dashboard, click on "Apps" and enable both the "Default encryption module" and "External storage support." These additions will provide added security and allow us to integrate external storage options.

Speaking of external storage, let's set up Nextcloud to use S3 storage on AWS. We'll start by creating a new user with programmatic access in the AWS Identity and Access Management (IAM) console. This will generate a set of access keys that we'll need later.

After creating the user, it's time to define a policy that grants our Nextcloud instance access to the S3 bucket. In your IAM console, create a new policy using the provided JSON code in our blog post. Replace NAMEOFYOURBUCKET with the name of your S3 bucket and attach this policy to the newly created user.
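
The exact JSON from the blog post isn't reproduced here, but a minimal policy of this kind typically grants list access on the bucket and object-level read/write access on its contents, something along these lines:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::NAMEOFYOURBUCKET"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::NAMEOFYOURBUCKET/*"
    }
  ]
}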

Now that our AWS setup is complete, let's configure Nextcloud to connect with our S3 storage. In the Nextcloud settings, select "External Storage." Fill in the "Bucket" field with NAMEOFYOURBUCKET. Check "Enable SSL" and "Enable Path Style," and enter the required information using the access keys of the user we created earlier.

And that's it! You've successfully installed Nextcloud on AWS EC2 and configured it to use S3 storage. You can now navigate to your designated folder and start uploading files securely. Enjoy the control and peace of mind that Nextcloud brings to your personal data management.

Thank you for tuning in to this episode of Continuous Improvement. I hope you found the information helpful and empowering. If you enjoyed this podcast, please subscribe and leave us a review. And remember, continuous improvement is the key to unlocking our full potential. Until next time!

Debugging PHP Code in the Browser

Welcome to "Continuous Improvement," the podcast where we explore different techniques and strategies for continuously improving our coding skills. I'm your host, Victor, and in today's episode, we'll be diving into a helpful trick for outputting to the console in PHP.

Hey there, fellow developers! Have you ever found yourself in a troubleshooting situation when working with PHP? You know, that moment where you just wished you could use a simple console.log like in JavaScript. Well, today I have a neat little trick to share with you.

In JavaScript, debugging directly in the browser console is a breeze. But when it comes to PHP, things can get a bit tricky. However, fear not! With this technique, you'll be able to accomplish the same thing. So, let's dive into the steps.

Step one, create a function named debug_to_console. This function will handle the output to the console. You can add the following code to your PHP file:

function debug_to_console($data) {
    $output = $data;

    // flatten arrays into a comma-separated string so they print cleanly
    if (is_array($output)) {
        $output = implode(',', $output);
    }

    // emit a small <script> tag so the value appears in the browser console
    echo "<script>console.log('Debug Objects: " . $output . "');</script>";
}

Step two, when you need to output something to the console, insert the following code:

debug_to_console("Test");

And voila! You should see the desired output in your browser's developer tools console.

But wait, there's more. Step three allows you to go even further by debugging objects logged as JSON strings. Here's how you can do it:

debug_to_console(json_encode($foo));

By encoding the object as a JSON string, you can easily log complex objects and analyze their contents in the console.

And there you have it! A nifty little trick to output to the console while working with PHP. Remember, continuous improvement is key to becoming a better developer.

That's all for today's episode of "Continuous Improvement." I hope you found this PHP console logging technique helpful. Stay tuned for more coding tips and tricks in our future episodes.

If you have any suggestions for topics you'd like us to cover or any questions you'd like me to answer, feel free to reach out on our website, continuousimprovement.com. Keep coding and keep improving!

Installing Ubuntu 19.10 on a MacBook Pro 13,1

Welcome to Continuous Improvement, the podcast where we explore ways to make our lives better, one step at a time. I'm your host, Victor, and today we're going to talk about a topic that might interest our fellow software developers out there. Have you ever found yourself frustrated with a particular operating system? Well, I certainly have, and today I want to share with you my journey from macOS to Ubuntu on my MacBook Pro.

As a software developer, having the right tools and environment to work in is essential. But sometimes, the operating system you're using can prove to be a roadblock to your productivity. That's when I decided to explore an alternative, and after some research, I found that Ubuntu could be the answer.

One of the reasons I wanted to switch from macOS Catalina to Ubuntu was the amount of disk space that Xcode and its bundled tools were taking up. Up to 10GB of disk space just for one software package! As a developer, I couldn't afford to waste precious time waiting for slow downloads and updates to finish.

Now, the first concern that popped into my mind was whether my MacBook Pro hardware would be compatible with an open-source Linux distribution like Ubuntu. But to my surprise, thanks to the efforts of the community, many features worked right out of the box with a fresh Ubuntu install. The screen, keyboard, touchpad, and Wi-Fi all worked seamlessly. The only feature that required a workaround was audio, which I managed to solve by using USB Type-C headphones or connecting to an external monitor with built-in speakers.

If you're curious about trying Ubuntu on your MacBook Pro, the process is actually quite simple. First, you'll need to download Ubuntu 19.10 from the official Ubuntu website. Once you have the ISO file, you'll create a bootable USB stick using a tool called Etcher. There's a helpful guide available on the Ubuntu website that will walk you through this step-by-step. After that, restart your MacBook, press the Option key, and select the USB stick as the boot device. From there, you can try Ubuntu and proceed with the installation if it suits your needs.

As a developer, I found that setting up essential tools like Git on Ubuntu was a breeze. With a simple command, you can install Git and start using it right away. This is a much more straightforward process compared to macOS, which can restrict your freedom in various ways.
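
For example, installing Git from the standard repositories takes just a couple of commands:

    sudo apt update
    sudo apt install git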

It's important not to become too comfortable with a single platform. By exploring alternative operating systems like Ubuntu, you can embrace the open-source community and experience the freedom of choice. At times, big corporations may not always act in our best interest when it comes to protecting our personal data from government surveillance. That's where open-source software shines, giving us the opportunity to take control of our own digital lives.

Before we wrap up, I want to share a couple of additional resources if you decide to make the switch to Ubuntu on your MacBook Pro. If you want to get Bluetooth working, there's a handy script available on GitHub that you can use. And if you're also looking to get your camera working, there's a detailed guide available to help you install the necessary driver.

Well, that's all for today's episode of Continuous Improvement. I hope that this discussion on transitioning from macOS to Ubuntu has given you some valuable insights. Remember, don't be afraid to explore alternatives and continuously improve your work environment. Stay tuned for our next episode, where we'll tackle another exciting topic. Until then, keep striving for continuous improvement in all aspects of your life.

Setting Up MongoDB with Koa.js

Welcome back to another episode of Continuous Improvement, the podcast where we explore the world of software development and find ways to level up our coding skills. I'm your host, Victor. In today's episode, we're going to dive into connecting a Koa.js server to a MongoDB database. If you're ready to learn, let's get started!

Before we begin, make sure you have Koa.js and MongoDB installed. Once that's done, let's jump right into the steps.

Step one, connect to the database before initializing the Koa app. To do this, you'll need to create a database.js file. Inside that file, import Mongoose, an Object Data Modeling (ODM) library, and your connection string from the configuration file. Remember to install Mongoose by running npm install --save mongoose.

const mongoose = require('mongoose');
const { connectionString } = require('./conf/app-config');

const initDB = () => {
  mongoose.connect(connectionString);

  mongoose.connection.once('open', () => {
    console.log('Connected to the database');
  });

  mongoose.connection.on('error', console.error);
};

module.exports = initDB;

Step two, create a schema in Koa. For example, let's create a user schema inside the /models/users.js file.

const mongoose = require('mongoose');
const Schema = mongoose.Schema;

const UserSchema = new Schema({
  username: String,
  email: String,
  picture: String
});

module.exports = mongoose.model('User', UserSchema);

Step three, create a service to query the data. In this example, we'll create a /service/user.service.js file.

import User from '../models/users';

export const getUserFromDb = async (username) => {
  const data = await User.findOne({ username });
  return data;
};

export const createUserInDb = async (user) => {
  const newUser = new User(user);
  await newUser.save();
  return user;
};

And finally, step four, call the service in the Koa controller. For instance, let's say we have a /controller/user.controller.js file.

import { getUserFromDb, createUserInDb } from '../service/user.service';

class UserController {
  static async getUser(ctx) {
    const user = await getUserFromDb(ctx.query.username);
    ctx.body = user;
  }

  static async registerUser(ctx) {
    const user = await createUserInDb(ctx.request.body);
    ctx.body = user;
  }
}

export default UserController;
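
To round things out, here's a rough sketch of how these pieces could be wired together in your server entry point, assuming koa-router and koa-bodyparser are installed; the route paths are just examples:

import Koa from 'koa';
import Router from 'koa-router';
import bodyParser from 'koa-bodyparser';

import initDB from './database';
import UserController from './controller/user.controller';

// connect to MongoDB before the app starts serving requests
initDB();

const app = new Koa();
const router = new Router();

// example routes mapped to the controller methods above
router.get('/user', UserController.getUser);
router.post('/user', UserController.registerUser);

app.use(bodyParser());
app.use(router.routes());
app.use(router.allowedMethods());

app.listen(3000, () => console.log('Server listening on port 3000'));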

And there you have it! By following these steps, you should be able to connect your Koa.js server to a MongoDB database. If you have any questions or need further assistance, feel free to reach out.

That's it for today's episode of Continuous Improvement. I hope you found this information helpful in your journey as a developer. Don't forget to subscribe to our podcast for more valuable insights and tips. Until next time, happy coding!