Show HN: Zingg – open-source entity resolution for single source of truth https://ift.tt/auC1cXH

Hello HN, I am Sonal, a data consultant from India. For the past few months (and years!), I have been working on an entity resolution tool to build a single source of truth for customers, suppliers, products and parts. Here is a short demo of Zingg in action: https://www.youtube.com/watch?v=zOabyZxN9b0

As a data consultant, I often struggled to build unified views of core entities on the data lake and the warehouse. Data spread across different systems has variations and inconsistencies, making Customer 360, KYC, AML, segmentation, personalization and other analytics difficult. As I talked with different clients facing this issue, I searched for existing solutions which I could use or recommend. Unfortunately, most of them were very expensive MDM solutions like Tamr, or CDP solutions like Amperity. There were many open source libraries, but they did not tie well into the data lake/warehouse scenarios we were working with, did not scale, needed a decent bit of programming, and/or did not generalize. I even tried to build something internally and failed miserably, and that got me hooked :-)

As I dug deeper into the problem, I realized that there were multiple challenges. Data matching, at its very core, becomes a cartesian join, as you need to compare every pair of records to figure out the matches. With millions of records, this becomes extremely tough to scale. I referred to various research papers and then implemented a blocking algorithm to overcome this. More details at https://ift.tt/6o5PFhz (a rough sketch of the idea is included at the end of this post).

The second challenge was deciding which pairs are a match. I wanted a machine learning-based approach to handle the different types of entities and the variety of differences in real-world data. But I also felt that non-ML experts should be able to use Zingg easily, so I took the approach of abstracting away the feature generation and hyper-parameter tuning for the classifier. Once I settled on the ML approach, the problem of training data quickly arose, which led me to pick up active learning and build an interactive labeler through which sample records can be marked as matches and non-matches to build training sets quickly (also sketched at the end of this post). I still feel that we should have an unsupervised approach as well, but I have not yet figured out the right way to do so.

The Zingg repository is hosted at https://ift.tt/YKIHOz9 and we have close to 60 members on our Slack (https://ift.tt/96rscu8). We are now two developers working full time on Zingg! I am super happy that early users have been able to use Zingg and push us to build more: model documentation, using pre-existing training data, native Snowflake integration, etc.

I have been an open source consumer all my dev life, and this is the first time I have made a decent contribution. It is also my first time trying to build a community. Not sure how the future will unfold, but I wanted to reach out to the community here and hear what you think about the problem, the approach, and any ideas or suggestions. Thanks for reading along, and please do post your thoughts in the comments below.

February 9, 2022 at 10:40PM
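
To make the blocking step described above concrete, here is a minimal sketch in Python. This is not Zingg's implementation (Zingg itself runs on Spark, and its blocking is more sophisticated than a single hand-picked column); the toy records and the "zip" blocking key are assumptions for illustration only.

```python
from collections import defaultdict
from itertools import combinations

# Toy records; "zip" acts as a hypothetical blocking key for illustration.
records = [
    {"id": 1, "name": "Jon Smith",  "zip": "10001"},
    {"id": 2, "name": "John Smith", "zip": "10001"},
    {"id": 3, "name": "Ana Gomez",  "zip": "94105"},
    {"id": 4, "name": "Anna Gomez", "zip": "94105"},
]

def naive_pairs(recs):
    """Compare every record with every other one: O(n^2) pairs,
    which is what makes plain matching a cartesian join."""
    return list(combinations(recs, 2))

def blocked_pairs(recs, key):
    """Only compare records that land in the same block, which is how
    blocking shrinks the search space to something scalable."""
    blocks = defaultdict(list)
    for r in recs:
        blocks[r[key]].append(r)
    pairs = []
    for block in blocks.values():
        pairs.extend(combinations(block, 2))
    return pairs

print(len(naive_pairs(records)))           # 6 candidate pairs for 4 records
print(len(blocked_pairs(records, "zip")))  # 2 candidate pairs, one per block
```

The point is simply that grouping records by a blocking key turns an all-pairs comparison into many small, independent comparisons, which is what makes matching millions of records tractable.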
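
The active-learning loop behind the interactive labeler can be sketched in the same spirit. This is a simplified illustration using scikit-learn's LogisticRegression with uncertainty sampling, not Zingg's actual labeler or classifier; the similarity features and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling_round(X_labeled, y_labeled, X_pool, n_queries=3):
    """One active-learning round: fit a classifier on the pairs labeled so far,
    then pick the unlabeled pairs whose predicted match probability is closest
    to 0.5 -- the pairs the model is least sure about -- to show the user next."""
    clf = LogisticRegression().fit(X_labeled, y_labeled)
    match_proba = clf.predict_proba(X_pool)[:, 1]
    query_idx = np.argsort(np.abs(match_proba - 0.5))[:n_queries]
    return clf, query_idx

# Each row is a vector of similarity features for one candidate pair
# (e.g. name similarity, address similarity); labels are 1 = match, 0 = non-match.
rng = np.random.default_rng(42)
X_labeled = rng.random((20, 2))
y_labeled = (X_labeled.sum(axis=1) > 1.0).astype(int)  # synthetic labels
X_pool = rng.random((200, 2))                          # unlabeled candidate pairs

clf, to_label = uncertainty_sampling_round(X_labeled, y_labeled, X_pool)
print("pairs to surface to the labeler next:", to_label)
```

After the user marks the surfaced pairs as matches or non-matches, they move from the pool into the labeled set and the loop repeats, which is why relatively few labels are enough to bootstrap a usable training set.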
