1. Facebook Markup Changes:
I never denied that Facebook changes its HTML structure; what I said is that such changes would be infrequent. This was discussed during our initial agreement, and the user accepted that understanding and proceeded with the deal knowing that a dynamic platform like Facebook makes changes over time.
The claim that the script is "useless" is overstated. The script worked as delivered and kept working until Facebook made changes to its markup. That is simply the nature of web scraping: scrapers need updates over time as the platform evolves. If that was not acceptable, the user should not have agreed to the terms.
2. Using CSS Selectors:
The script was created with CSS selectors for the following reasons:
- It is an efficient and standard method for scraping.
- The user never specified any other preferred method, such as XPath or attribute-based scraping, during the initial agreement.
If they had requested an alternative method such as XPath or API-based extraction, I would have considered it during development. However, they did not raise this until now, long after delivery. Changing the approach to meet new demands requires extra work, which was not part of our agreement.
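For illustration only, here is a minimal sketch of what CSS-selector-based extraction looks like in practice. It assumes a Python implementation using requests and BeautifulSoup, and every class name, URL, and field name in it is a hypothetical placeholder rather than the actual markup the delivered script targets.

```python
# Hypothetical sketch of CSS-selector-based extraction (placeholder markup).
import requests
from bs4 import BeautifulSoup

def extract_posts(html: str) -> list[dict]:
    """Locate post containers by CSS class and pull out a couple of text fields."""
    soup = BeautifulSoup(html, "html.parser")
    posts = []
    # "div.post-container", "span.post-author", "div.post-body" are placeholder
    # selectors; the real script targets whatever classes the page used at delivery.
    for node in soup.select("div.post-container"):
        author = node.select_one("span.post-author")
        body = node.select_one("div.post-body")
        posts.append({
            "author": author.get_text(strip=True) if author else None,
            "body": body.get_text(strip=True) if body else None,
        })
    return posts

if __name__ == "__main__":
    page = requests.get("https://example.com/sample-page")  # placeholder URL
    for post in extract_posts(page.text):
        print(post)
```

The point of the sketch is that the locating logic hinges on a handful of selector strings; the extraction itself is standard, and only those strings depend on the markup that was current at delivery.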
3. Refactoring the Script:
Refactoring the script to use more stable methods, such as XPath or the detection of JSON endpoints, is a significant update. While I am happy to make small fixes to ensure functionality, this level of refactoring is essentially a rewrite of the script.
The user claims this doesn’t require “remaking the entire script,” but in reality it involves:
- Changing how elements are located.
- Identifying entirely new selectors or methods.
- Testing and debugging to ensure reliability with Facebook’s ever-changing structure.
This goes far beyond a simple adjustment and is not something I can do for free; the sketch below illustrates the scale of the change. If the user needs this level of work, they will need to find another developer for it.
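To show why this is not a minor tweak, here is a hedged sketch of what an XPath-based location strategy would look like next to the CSS approach sketched earlier. It again assumes Python, this time with lxml, and every XPath expression and attribute in it is a hypothetical placeholder, not something taken from Facebook's live markup.

```python
# Hypothetical contrast: locating the same (placeholder) post elements via XPath.
from lxml import html as lxml_html

def extract_posts_xpath(page_html: str) -> list[dict]:
    tree = lxml_html.fromstring(page_html)
    posts = []
    # Placeholder XPath expressions; real ones would have to be discovered and
    # validated against the current page structure from scratch.
    for node in tree.xpath('//div[@role="article"]'):
        author = node.xpath('.//span[contains(@class, "post-author")]/text()')
        body = node.xpath('.//div[@data-testid="post-body"]//text()')
        posts.append({
            "author": author[0].strip() if author else None,
            "body": " ".join(t.strip() for t in body if t.strip()) or None,
        })
    return posts
```

Switching to this strategy means replacing every selector, rewriting the surrounding extraction logic, and re-testing it against live pages, which is why the request amounts to a rewrite rather than an adjustment.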
4. Current Script:
The script has been implemented with the agreed-upon methods and works under the conditions discussed during development. Contrary to the claim, it is not "useless": it worked as intended when delivered. The changes Facebook has made since then do not mean the script was poorly designed; scraping is dynamic by nature, which the user was already aware of.
5. Current Situation and Updates:
I would like to note that the developer working on this project currently has COVID and is behind on responses as a result. The user was informed of this before opening the dispute, so it did not come as a surprise.
That does not stop us from working on updates; we remain committed to providing updates and minor fixes wherever possible, and an update will be provided as soon as one is available.
The user cannot dictate how the script should be built after the project has been completed. If they had specific requirements or preferred methods in mind, those should have been communicated during the initial agreement. We built exactly what the user asked for and agreed to during our discussions.
It’s important to note that Facebook posts do not expose stable text markers or unique identifiers beyond their CSS classes. The posts are dynamic and unique, with no other reliable identifiers available. CSS classes were therefore the appropriate method given the structure of the content and the user’s requirements.
The suggestion to use “textual patterns” or “other methods” is not feasible in this case, because such hooks simply do not exist in the data being scraped. If the user wishes to explore alternative approaches, that would fall under a new scope of work, as these ideas were not part of the original agreement.
We are happy to make
small adjustments within reason, but we cannot redesign the script based on new expectations that weren’t discussed or agreed upon initially.
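As an example of the kind of small adjustment that is within reason, here is a minimal sketch assuming, purely hypothetically, that the selectors live in a single configuration block; the delivered script's internal layout may differ, and the class names shown are placeholders.

```python
# Hypothetical sketch: keeping selectors in one configuration block so that a
# renamed class is a small, contained fix rather than a redesign.
from bs4 import BeautifulSoup

SELECTORS = {
    "post": "div.post-container",   # placeholder class names; if Facebook
    "author": "span.post-author",   # renames one, only this mapping changes
    "body": "div.post-body",
}

def locate_posts(html: str):
    """Yields post fields; extraction reads class names from SELECTORS only."""
    soup = BeautifulSoup(html, "html.parser")
    for node in soup.select(SELECTORS["post"]):
        author = node.select_one(SELECTORS["author"])
        body = node.select_one(SELECTORS["body"])
        yield {
            "author": author.get_text(strip=True) if author else None,
            "body": body.get_text(strip=True) if body else None,
        }
```

With a layout like this, a renamed class is a one-line change to the mapping, which is exactly the sort of minor fix we remain willing to provide.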